field | type
---|---
forum_id | string (lengths 9–20)
forum_title | string (lengths 3–179)
forum_authors | sequence (lengths 0–82)
forum_abstract | string (lengths 1–3.52k)
forum_keywords | sequence (lengths 1–29)
forum_decision | string (22 classes)
forum_pdf_url | string (lengths 39–50)
forum_url | string (lengths 41–52)
venue | string (46 classes)
year | date (2013-01-01 to 2025-01-01)
reviews | sequence

forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews
---|---|---|---|---|---|---|---|---|---|---
EUeNr3e8AV | R2Det: Exploring Relaxed Rotation Equivariance in 2D Object Detection | [
"Zhiqiang Wu",
"Yingjie Liu",
"Hanlin Dong",
"Xuan Tang",
"Jian Yang",
"Bo Jin",
"Mingsong Chen",
"Xian Wei"
] | Group Equivariant Convolution (GConv) empowers models to explore underlying symmetry in data, improving performance. However, real-world scenarios often deviate from ideal symmetric systems caused by physical permutation, characterized by non-trivial actions of a symmetry group, resulting in asymmetries that affect the outputs, a phenomenon known as Symmetry Breaking. Traditional GConv-based methods are constrained by rigid operational rules within group space, assuming that data remain strictly symmetric after limited group transformations. This limitation makes it difficult to adapt to Symmetry-Breaking and non-rigid transformations. Motivated by this, we mainly focus on a common scenario: Rotational Symmetry-Breaking. By relaxing strict group transformations within the Strict Rotation-Equivariant group $\mathbf{C}_n$, we redefine a Relaxed Rotation-Equivariant group $\mathbf{R}_n$ and introduce a novel Relaxed Rotation-Equivariant GConv (R2GConv) with only a minimal increase of $4n$ parameters compared to GConv. Based on R2GConv, we propose a Relaxed Rotation-Equivariant Network (R2Net) as the backbone and develop a Relaxed Rotation-Equivariant Object Detector (R2Det) for 2D object detection. Experimental results demonstrate the effectiveness of the proposed R2GConv in natural image classification, and R2Det achieves excellent performance in 2D object detection with improved generalization capabilities and robustness. The code is available at \texttt{https://github.com/wuer5/r2det}. | [
"Relaxation",
"Rotation",
"Equivariance"
] | Accept (Poster) | https://openreview.net/pdf?id=EUeNr3e8AV | https://openreview.net/forum?id=EUeNr3e8AV | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yETzagsnjX",
"vfzhMCAb94",
"uY6Ze3oyl2",
"uT75UJQxdW",
"uR4bmLxhsG",
"rrdv56L4Wh",
"pBJaM71rXR",
"lCYZ2N2zUS",
"isrGJci5OE",
"iG26U03SpV",
"dolSCUAnUz",
"Nn24mjFABz",
"KvasNnZkB3",
"KRUXz3f4wE",
"JZMEAmX7Fn",
"JTNKDZnANz",
"HrsZUsvY8c",
"EjlirDY7AC",
"EA4AmQyfW8",
"BOMbP99S8V",
"AbcUqA9CgK",
"5yWBVpAF8w",
"3XWVvFbAr2",
"3KPAtBBQUK",
"04tQSLX3KO"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1733149157681,
1731838371038,
1732452235452,
1730643496340,
1731838892826,
1731851722182,
1731838457829,
1730299021844,
1732093107955,
1731838759240,
1732093050587,
1734506259117,
1731839254075,
1732092892529,
1732452286782,
1733148383970,
1732862887540,
1730625945741,
1732850399358,
1737523672158,
1732098465938,
1731838316496,
1731839385510,
1731838716641,
1732102567029
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Reviewer_e3oq"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Reviewer_AppE"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Area_Chair_FFqm"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Reviewer_L1nv"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Reviewer_L1nv"
],
[
"ICLR.cc/2025/Conference/Submission4933/Reviewer_e3oq"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4933/Reviewer_AppE"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4933/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Reviewer L1nv,\\n\\nWe are delighted to hear that your concerns have been thoroughly addressed. Thank you for your efforts in reviewing the manuscript and for recognizing our work.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"title\": \"Author Rebuttal to Reviewer AppE: Part 2\", \"comment\": \"**Answer to Weakness 1.2**:\\nIntroducing the learnable perturbation $\\\\Delta$ may lead to the misconception that R2Det is unpredictable, but our R2Det is indeed predictable.\\nThe reason for this misunderstanding might stem from the belief that $\\\\Delta$ is unpredictable. \\n\\n**In fact, $\\\\Delta$ can be considered as implicitly predictable.**\\n**During the training phase, $\\\\Delta$ is updated end-to-end through gradient descent, a process determined by the training data.** \\n\\nOnce the model completes training, gradient updates cease, and the model's parameters are frozen (including $\\\\Delta$), thus making the entire model **a deterministic function.** \\n\\nTherefore, the output of a deterministic function is predictable under the transformation of the input,\\nwhich is consistent with the discussion of Relaxed Equivariance in (Kaba and Ravanbakhsh, 2023).\\n\\n**Answer to Weakness 1.4**:\\nOn $\\\\mathbf{C}_n$ group, define $\\\\mathbf{c}^i(\\\\cdot)$ as rotation of $\\\\cdot$ by $2\\\\pi i/n$, and \\n$\\\\mathbf{c}^{i+1}(\\\\cdot)=\\\\mathbf{c}^{(i+1)\\\\~\\\\text{mod}\\\\~n}(\\\\cdot)$.\\nWe have the following conclusion (Note: for simplicity, we ignore the input and output channels):\\n\\n**1. 
$\\\\mathbf{C}_n$-GConv (Vanilla Group Convolution) is a strict rotation-equivariant block, proven as follows:**\\n\\nGiven input $x$ and initial weight $\\\\psi$, we obtain the strict rotation-equivariant filter\\n$\\\\psi_i^{\\\\text{strict}}=\\\\mathbf{c}^i(\\\\psi)$ in the $i$-order of $\\\\mathbf{C}\\\\_n$.\\nThen $\\\\mathbf{C}\\\\_n$-GConv can be defined as:\\n$f_1(x)=\\\\sum_{i=0}^{n-1}{x} * \\\\psi_i^{\\\\text{strict}}\\n=\\\\sum_{i=0}^{n-1}{x} * \\\\mathbf{c}^i(\\\\psi)$.\\n\\nFor any $j\\\\in\\\\\\\\{0,1,2,\\\\cdots,n-1\\\\\\\\}$, we have:\\n\\n- $f_1(\\\\mathbf{c}^j(x))=\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{c}^i(\\\\psi)$\\n\\n- $\\\\mathbf{c}^j(f\\\\_1(x))=\\\\mathbf{c}^j(\\\\sum_{i=0}^{n-1}{x} * \\\\mathbf{c}^i(\\\\psi))=\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{c}^{i+j}(\\\\psi)\\n=\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{c}^{(i+j)\\\\~\\\\text{mod}\\\\~n}(\\\\psi)=\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{c}^i(\\\\psi)$.\\n\\nTherefore, $f\\\\_1(\\\\mathbf{c}^j(x))=\\\\mathbf{c}^j(f_1(x))$. According to Eq. (1) in the paper, $\\\\mathbf{C}_n$-GConv is equivariant.\\n\\n**2. 
$\\\\mathbf{C}_n$-R2GConv (Ours) is a relaxed rotation-equivariant block, proven as follows:**\\n\\nGiven input $x$, initial weight $\\\\psi$, an affine transformation function $\\\\mathbf{t}$\\nand the learnable perturbation $\\\\Delta$, we obtain the relaxed rotation-equivariant filter\\n$\\\\psi_i^{\\\\text{relaxed}}=\\\\mathbf{t}^i(\\\\psi,\\\\Delta)$ in the $i$-order of $\\\\mathbf{C}\\\\_n$.\\nThen $\\\\mathbf{C}_n$-R2GConv can be defined as:\\n$f_2(x)=\\\\sum_{i=0}^{n-1}{x} * \\\\psi_i^{\\\\text{relaxed}}=\\\\sum_{i=0}^{n-1}{x} * \\\\mathbf{t}^i(\\\\psi,\\\\Delta)$.\\n\\nFor any $j\\\\in\\\\\\\\{0,1,2,\\\\cdots,n-1\\\\\\\\}$, we have:\\n- $f_2(\\\\mathbf{c}^j(x))=\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{t}^i(\\\\psi,\\\\Delta)$\\n\\n- $\\\\mathbf{c}^j(f_2(x))=\\\\mathbf{c}^j(\\\\sum_{i=0}^{n-1}{x} * \\\\mathbf{t}^i(\\\\psi,\\\\Delta))=\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{c}^j(\\\\mathbf{t}^i(\\\\psi,\\\\Delta))$.\\n\\nTherefore, $||f_2(\\\\mathbf{c}^j(x)) - \\\\mathbf{c}^j(f_2(x))|| = ||\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{t}^i(\\\\psi,\\\\Delta) - \\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * \\\\mathbf{c}^j(\\\\mathbf{t}^i(\\\\psi,\\\\Delta))||\\n=||\\\\sum_{i=0}^{n-1}{\\\\mathbf{c}^j(x)} * (\\\\mathbf{t}^i(\\\\psi,\\\\Delta)-\\\\mathbf{c}^j(\\\\mathbf{t}^i(\\\\psi,\\\\Delta)))|| \\\\le \\\\epsilon$.\\nAccording to Eq. (4) in the latest revised version, $\\\\mathbf{C}_n$-R2GConv is a relaxed rotation-equivariant block.\\nIn particular, when $\\\\Delta=0$, we have $\\\\mathbf{t}^i(\\\\psi,\\\\Delta)=\\\\mathbf{c}^i(\\\\psi)$, thus $||f_2(\\\\mathbf{c}^j(x)) - \\\\mathbf{c}^j(f_2(x))||=0$,\\ni.e., $\\\\epsilon=0$. Therefore, $f_2$ is strictly rotation-equivariant when $\\\\Delta=0$.\\n\\nIn conclusion, we have mathematically proved that **$\\\\mathbf{C}_n$-R2GConv is a relaxed rotation-equivariant block.** \\nWe have included this proof in **Appendix A.10** of the latest revised version.\"}",
"{\"title\": \"Looking forward to your valuable reply\", \"comment\": \"Dear Reviewer e3oq,\\n\\nSorry to bother you again. \\n\\nAs the rebuttal phase nears the end, we would like to know if we have addressed your concerns.\\n\\nIf you have any remaining concerns, please let us know. We look forward to your valuable reply.\\n\\nThank you for your efforts in our paper.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"summary\": \"This paper introduces R2Det, a novel object detection model that explores Relaxed Rotation-Equivariance (RRE) to handle real-world scenarios where strict rotational symmetries are often violated. RRE is incorporated into group convolution by introducing the Relaxed Rotation-Equivariant Filter (R2EFilter) and the Relaxed Rotation-Equivariant Group Convolution (R2GConv). The paper further proposes R2Net for image feature extraction and R2Det for 2D object detection, achieving improved convergence and performance with fewer parameters.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This work makes a valuable contribution to the field of object detection by addressing the limitations of traditional, strictly rotation-equivariant models and exploring the potential of RRE through ER2GConv.\", \"This work presents a significant contribution to the field of object detection by addressing a crucial limitation in handling real-world scenarios, where strict rotational symmetries are rarely observed. The authors introduce a novel approach, Relaxed Rotation-Equivariance (RRE), which effectively addresses this limitation. The proposed R2Det model leverages RRE to achieve remarkable performance with a significantly reduced parameter count compared to other leading models. This showcases the model's efficiency and its ability to achieve high accuracy while requiring fewer computational resources. The paper further strengthens its arguments with a rigorous mathematical framework, providing theoretical underpinnings for RRE's effectiveness. Moreover, the plug-and-play nature of the proposed ER2GConv layer allows for seamless integration into existing object detection models, making it a versatile and readily applicable technique. 
This combination of novelty, theoretical soundness, efficiency, and integration capabilities makes this research highly valuable for advancing the field of object detection.\", \"However, the paper lacks sufficient exploration of the key parameter b, which controls the perturbation factor \\u0394. While the paper mentions that b=0.1 yields the best results, more extensive experiments with intermediate values of b would strengthen the argument for the necessity of this perturbation parameter and provide a deeper understanding of its influence on performance. Additionally, it would be beneficial to investigate the performance of R2Det in larger configurations, such as \\u201cR2Det-L\\u201d, and compare its performance to the corresponding \\u201cLarge\\u201d versions of YOLO models. This would provide a more complete assessment of the model's scalability and potential limitations in handling more complex and computationally demanding tasks. The clarity and persuasiveness of the paper can be further enhanced by addressing these specific concerns, and by providing concise explanations for abbreviations like SO(2) (Special Orthogonal Group in 2D space) and ER2GCBA (Efficient Relaxed Rotation-Equivariant Group Convolution plus -BA, where -BA likely refers to a specific architectural component or technique).\", \"This work makes a valuable contribution to the field of object detection by addressing the limitations of traditional, strictly rotation-equivariant models and exploring the potential of RRE through ER2GConv.\", \"The authors introduce a novel approach, Relaxed Rotation-Equivariance (RRE), which effectively addresses the limitation of handling real-world scenarios where strict rotational symmetries are rarely observed.\", \"The proposed R2Det model leverages RRE to achieve remarkable performance with a significantly reduced parameter count compared to other leading models, showcasing the model's efficiency and ability to achieve high accuracy while requiring 
fewer computational resources.\", \"The paper further strengthens its arguments with a rigorous mathematical framework, providing theoretical underpinnings for RRE's effectiveness.\", \"The plug-and-play nature of the proposed ER2GConv layer allows for seamless integration into existing object detection models, making it a versatile and readily applicable technique.\"], \"weaknesses\": [\"The paper lacks sufficient exploration of the key parameter b, which controls the perturbation factor \\u0394, and more extensive experiments with intermediate values of b would strengthen the argument for the necessity of this perturbation parameter.\", \"** In page 8, Figure 4(a) and Table 1, the results presented demonstrate a minimal improvement in AP when (b=0.1) compared to (b=0).\", \"** Performance deteriorates when (b>0.1), and it would be beneficial to conduct more thorough experiments, especially within the interval [0, 0.1], e.g., 0.02, 0.05, to provide a more definitive analysis of the value of b and its impact on the model\\u2019s performance.\", \"It would be beneficial to provide concise explanations for abbreviations like SO(2) (Special Orthogonal Group in 2D space) and ER2GCBA (Efficient Relaxed Rotation-Equivariant Group Convolution plus -BA, where the meaning of -BA remains undefined).\", \"It would be beneficial to investigate the performance of R2Det in larger configurations, such as 'R2Det-L', and compare its performance to the corresponding 'large' versions of YOLO models, to provide a more complete assessment of the model's scalability and potential limitations in handling more complex and computationally demanding tasks.\", \"It would be beneficial to include a comparative study with more recent and advanced object detection models, such as YOLOv11 and other models, to provide a broader context and demonstrate the model's performance relative to the state-of-the-art.\", \"I noticed an interesting discrepancy in the results presented on page 8, 
specifically in Table 1 and Table 2. While both tables use the VOC test dataset, the reported AP scores for the SRE (b=0) model in Table 1 (C4) differ from the reported SRE scores in Table 2 (C4).\", \"** In Table 1, the AP50(%) and AP50:95(%) for b=0 are 83.8 and 64.4 respectively, whereas in Table 2, the SRE AP50(%) and AP50:95(%) for C4 are 82.9 and 64.2.\", \"** This discrepancy raises a question regarding potential differences in the implementation of SRE versus the b=0 setting, or if it could be due to variations in the experimental runs. The authors may please comment on this interesting observation and clarify the reasons behind the difference in AP scores.\"], \"questions\": [\"The paper lacks sufficient exploration of the key parameter b, which controls the perturbation factor \\u0394, and more extensive experiments with intermediate values of b would strengthen the argument for the necessity of this perturbation parameter.\", \"Performance deteriorates when (b>0.1), and it would be beneficial to conduct more thorough experiments, especially within the interval [0, 0.1], e.g., 0.02, 0.05, to provide a more definitive analysis of the value of b and its impact on the model\\u2019s performance.\", \"It would be beneficial to provide concise explanations for abbreviations like SO(2) (Special Orthogonal Group in 2D space) and ER2GCBA (Efficient Relaxed Rotation-Equivariant Group Convolution plus -BA, where the meaning of -BA remains undefined).\", \"It would be beneficial to investigate the performance of R2Det in larger configurations, such as 'R2Det-L', and compare its performance to the corresponding 'large' versions of YOLO models, to provide a more complete assessment of the model's scalability and potential limitations in handling more complex and computationally demanding tasks.\", \"It would be beneficial to include a comparative study with more recent and advanced object detection models, such as YOLOv11 and other models, to provide a broader context 
and demonstrate the model's performance relative to the state-of-the-art.\", \"This discrepancy raises a question regarding potential differences in the implementation of SRE versus the b=0 setting, or if it could be due to variations in the experimental runs. The authors may please comment on this interesting observation and clarify the reasons behind the difference in AP scores.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Author Rebuttal to Reviewer e3oq: Part 1\", \"comment\": \"**Thank you for your appreciation of our work and for taking the valuable time to review the manuscript. The following may address your concerns.**\\n\\n>**Weakness 1:**\\nThe paper lacks sufficient exploration of the key parameter b, which controls the perturbation factor \\u0394, and more extensive experiments with intermediate values of b would strengthen the argument for the necessity of this perturbation parameter. **In page 8, Figure 4(a) and Table 1, the results presented demonstrate a minimal improvement in AP when (b=0.1) compared to (b=0).** Performance deteriorates when (b>0.1), and it would be beneficial to conduct more thorough experiments, especially within the interval [0, 0.1], e.g., 0.02, 0.05, to provide a more definitive analysis of the value of b and its impact on the model's performance.\\n\\n**Answer:**\\nThank you very much for your suggestions. We conduct more comprehensive experiments regarding the hyperparameter $b$, and the results are as follows:\\n\\n|$b$|$\\\\text{AP}_{50}$|$\\\\text{AP}_{50:95}$|\\n|-|-|-|\\n|0|83.8|64.4|\\n|0.01|84.0|64.9|\\n|0.02|84.2|65.2|\\n|0.04|84.3|65.5|\\n|**0.06**|**84.3**|**65.6**|\\n|0.08|84.2|65.3|\\n|0.1|84.1|65.1|\\n|0.2|83.5|64.3|\\n|0.4|83.6|64.4|\\n|0.6|82.4|62.6|\\n|0.8|80.7|59.7|\\n\\n**First of all, we need to emphasize that $b$ is a hyperparameter that sets a Uniform distribution $\\\\mathcal{U}(-b,b)$ for $\\\\Delta$. In fact, the RRE ($b=0$) model is not equivalent to the SRE model.**\\n\\nWhen $b=0$, all initial values of $\\\\Delta$ are $0$. At this moment, the RRE ($b=0$) model is equivalent to the SRE model, but $\\\\Delta$ still undergoes end-to-end updates with gradient descent. 
After that moment, the RRE ($b=0$) model is not equivalent to the SRE model.\\n\\nFor this experiment, we intend to explore the impact of the initial values of $\\\\Delta$ on the model performance.\\n\\nFrom the table, it can be seen that when $b\\\\in[0.01, 0.1]$, the model's performance is higher than that of $b=0$, and they all converge to similar results, especially when $b=0.06$, the model performance is the highest.\\n\\nWhen $b=0$, meaning all initial values of $\\\\Delta$ are set to $0$, the updates of $\\\\Delta$ lack an initial push for convergence, resulting in the model being lower than models with a small initial push.\\n\\nWhen $b=0.2$ or $b=0.4$, the model performance starts to decrease, and when $b=0.6$ or $0.8$, the model performance decreases significantly. It can be inferred that when $b>0.1$ and as $b$ increases, the model performance will decrease significantly. We speculate that excessive disturbance may lead to the model updating towards incorrect gradients, resulting in decreased model performance.\\n\\nFrom the above experiments, we can conclusions:\\nThe initial values of $\\\\Delta$ have a significant impact on the model. Providing $\\\\Delta$ with small initial values (such as $b=0.02, 0.04, 0.06$) is beneficial for the model to converge better. However, providing $\\\\Delta$ with large initial values (such as $b=0.6, 0.8$) is not conducive to better convergence of the model. This conclusion can be found in Figure 4(a) of the paper.\"}",
"{\"title\": \"Author Rebuttal to Reviewer AppE: Part 4\", \"comment\": \"> **Question 2:** Experiments showed that on both PASCAL VOC and COCO, performance with strict equivariance is inferior to that with the approximate equivariance introduced in the paper. For intuition, could the authors provide specific (intuitive or experimental) examples of 2D object detection in which it is beneficial to break the strict rotation equivariance?\\n\\n**Answer to Question 2:**\\nThank you for your suggestions, which will help improve the quality of our paper again.\\n\\nWe have adopted your suggestions in the latest revised version and added a 2D object detection case in **Section 4.3** of the paper to illustrate the advantages of relaxed rotation-equivariance.\\n\\nFinally, we aim to prove that by allowing some flexibility in the equivariance constraints, our model can better capture the anisotropic nature of objects and their contexts, leading to improved performance in various 2D object detection tasks, finally reducing false positives and improving the overall detection accuracy. You can refer to the latest revised version for this example.\\n\\n\\n> **Details Of Ethics Concerns:**\\nPotential (but unlikely) plagiarism: the submission is very similar to a preprint that has been on arXiv [1] since August 2024, and Figure 8 even mistakenly re-uses the method name from the ArXiv paper. It would be useful to verify that the authors of the ICLR paper and arXiv paper are the same, which would mean that they just changed their own paper title and method name.\\n\\n**Response to Details Of Ethics Concerns:** In fact, the current submitted paper (\\\"R2Det: Exploring Relaxed Rotation Equivariance in 2D Object Detection\\\") is our original work and does not violate any ICLR principles. We have clarified this with PC/SPC/AC, and if you have any questions, please feel free to contact with them.\\n\\n**Thank you again for your review. 
We look forward to your valuable and timely response, and we are willing to address all your concerns.**\"}",
"{\"title\": \"Author Rebuttal to Reviewer AppE: Part 3\", \"comment\": \"> **Weakness 2**\\nThe paper is not easy to read, mainly because of: the abundant use of abbreviations and acronyms (ENN, GConv, RRE, SRE, NRE, R2Filter, R2Lift, R2GConv, DR2GConv, PR2GConv, ER2GConv, ER2GCBA, ...), multiple typos (\\\"Rotationa-Equivariant\\\" line 25, \\\"we further exploring\\\" line 62, \\\"More analysis can refer to\\\" line 167, \\\"converge when 66-epoch\\\" and \\\"converges at about 198 in the epoch\\\" lines 401-402, ...) and over-loaded illustrations (e.g., Figure 2 and 3).\\n\\n**Answer to Weakness 2:**\\nSorry for the confusion.\\n\\nThank you very much for your suggestion, which will contribute to the quality of our paper.\\nIn the latest revised version, we have adopted your suggestion to redefine these concepts (e.g., R2Lift, R2GConv, DR2GConv, PR2GConv, ER2GConv, ER2GCBA,...), and redrawn Figures 2 and 3.\\n\\nWe have carefully revised these typos and thoroughly examined other typos.\\nYou can check the latest revised version, and the blue color represents the content we have modified or added.\\n\\n> **Question 1.1:**\\nThe performance gap between different n-norder cyclic rotation groups is substantial (Table 3). \\nWhat is the reason for such a gap? Is it only due to the \\\"newer equivariant angles\\\"? \\n\\n**Answer to Question 1.1:**\\nR2GConv (or GConv) uses shared convolutional filters through rotational transformations, enabling it to capture rotation-equivariant features. 
It can learn these features at different rotation angles to enhance the model's robustness.\\n\\nIn Table 3 of the paper, the performance gap between different orders of cyclic rotation groups is primarily due to the introduction of new equivariant angles.\\nThese additional equivariant angles can offer richer rotation-equivariant features, thereby improving the model's accuracy in detection tasks.\\n\\nAlthough as $n$ increases on the group, the model captures more diverse rotation-equivariant features, enhancing performance, this improvement should have an upper limit, with a substantial increase in parameters.\\nGenerally, the $\\\\mathbf{C}_4$ group provides the optimal balance between performance and parameter efficiency.\\n\\n> **Question 1.2:**\\nHow would the introduced architecture perform without equivariance? \\n\\n**Answer to Question 1.2:**\\nWe replace all R2GConvs including any variants in R2Det with vanilla Convs while keeping all model parameters consistent. The experimental results on the VOC dataset are as follows:\\n|Type|$\\\\text{AP}_{50}$|$\\\\text{AP}_{50:90}$|Params.|\\n|-|-|-|-|\\n|w/o equivariance|78.6|57.7|3.2M|\\n|w/ $\\\\mathbf{C_4}$ strict rotation-equivariance (SRE)|82.9|64.2|2.6M|\\n|w/ $\\\\mathbf{C_4}$ relaxed rotation-equivariance (RRE, Ours)|**84.1**|**65.1**|2.6M|\\n\\nFrom the table, it is evident that without $\\\\mathbf{C}_4$ (relaxed or strict) rotation-equivariance, the model's performance significantly decreases. 
This is mainly because vanilla Conv does not possess rotation-equivariance, thus failing to capture objects' rotation-equivariance.\\nThis also indicates that the performance of our R2Det model is not caused by the introduced architecture.\\n\\n> **Question 1.3:**\\nAlso, is there an intuitive explanation for why it is beneficial for 2D object detection (with non-rotating bounding boxes, like on COCO) \\nto use C8 instead of C4, despite the output (bounding boxes) only having symmetries in the C4 group?\\n\\n**Answer to Question 1.3:**\\nIn our R2Det, we utilize the YOLOv8 detection head, where the output (bounding boxes) is generated through the following steps:\\n- Classifying image pixel points for prediction to select candidate points.\\n- Generating multiple detection boxes (vertical or horizontal) based on these candidate points.\\n- Applying non-maximum suppression (NMS) to eliminate overlapping regions, thereby obtaining the detection results.\\n\\nDuring the feature extracting process, using $\\\\mathbf{C_8}$ instead of $\\\\mathbf{C_4}$ can provide finer-grained orientation sensitivity, improved feature representation, better capture of contextual information, robustness to rotation variations, and more effective training dynamics.\\n\\nSince the detection head is **anchor-free** and **does not directly involve concepts of symmetry or equivariance in the output (bounding boxes)**, the primary focus of the detection process lies in the classification of image pixel points and the application of Non-Maximum Suppression (NMS) based on the image features. 
\\n\\nThe features derived from the $\\\\mathbf{C_8}$ group bring richer geometric information about objects in images compared to the $\\\\mathbf{C_4}$ group to the detection head of the model, which finally contributes to more accurate and robust object detection.\\n\\nIn fact, our work has always been exploring (relaxed or strict) rotation-equivariant properties on natural image datasets, to promote the performance of **a universal object detection algorithm (e.g., YOLO series, DETR series) that utilizes these properties.**\"}",
"{\"summary\": \"The paper first discusses a limitation of strict equivariance for real-world applications (i.e. symmetry breaking) and then introduces building blocks and architectures for relaxed rotation equivariance. It focuses mostly on 2D object detection.\\nMore specifically, the authors introduce a relaxation of the group operations on the $n$-order cyclic rotation group $C_n$ by adding learnable perturbations to existing (strict) rotation equivariant filters. They then design different relaxed rotation-equivariant group convolutional modules that serve as building blocks for two architectures: the relaxed rotation-equivariant network (R2Net) and object detector (R2Det).\\n\\nExperimentally, the paper compares different perturbation levels and shows that a small perturbation leads to stronger performance than strict equivariance (i.e., with no perturbation) on 2D detection datasets (PASCAL VOC and COCO). The new method is also compared to the YOLO series detectors and achieves much higher performance-compute trade-offs. Finally, the R2Net and R2Det methods are evaluated respectively on image classification and instance segmentation.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The method demonstrates impressive performance-compute trade-offs on the object detection task.\", \"The authors evaluated the effect of the main contribution (i.e., the learnable perturbation) on performance. Furthermore, its effect on the learned features is well illustrated in Figure 5.\"], \"weaknesses\": \"1. The provided background and naming of methods are misleading and partially incorrect. Indeed, **relaxed** equivariance is defined in (Kaba and Ravanbakhsh, 2023) as a relaxation that allows breaking the symmetry of inputs and mapping to arbitrary orbit types when necessary. Note that the output of the function is still predictable under the transformation of the input. 
While in R2Det, the relaxed equivariance is mistakenly defined (definition 1, line 161) using the definition of **$\\\\epsilon$-approximate** equivariance (despite referring to the definition from (Wang et al., 2022a) which correctly names it **$\\\\epsilon$-approximate** equivariance).\\nFurthermore, the introduced filter is called a \\\"relaxed rotation-equivariant filter\\\" but is implemented by allowing for some learnable perturbation, which is therefore **NOT** a **relaxed** rotation-equivariance module. Figure 1a also incorrectly illustrates the problem being tackled in the paper. Notably, relaxed equivariance was introduced as an alternative to noise-injection methods: \\\"offering an alternative to the noise-injection methods\\\" (see the abstract from Kaba and Ravanbakhsh, 2023).\\nThe above-mentioned problems make the paper's claims incorrect and could lead to important misunderstandings of already established concepts. \\n2. The paper is not easy to read, mainly because of: the abundant use of abbreviations and acronyms (ENN, GConv, RRE, SRE, NRE, R2Filter, R2Lift, R2GConv, DR2GConv, PR2GConv, ER2GConv, ER2GCBA, ...), multiple typos (\\\"Rotationa-Equivariant\\\" line 25, \\\"we further exploring\\\" line 62, \\\"More analysis can refer to\\\" line 167, \\\"converge when 66-epoch\\\" and \\\"converges at about 198 in the epoch\\\" lines 401-402, ...) and over-loaded illustrations (e.g., Figure 2 and 3).\", \"questions\": [\"The performance gap between different $n$-norder cyclic rotation groups is substantial (Table 3). What is the reason for such a gap? Is it only due to the \\\"newer equivariant angles\\\"? How would the introduced architecture perform without equivariance? 
Also, is there an intuitive explanation for why it is beneficial for 2D object detection (with non-rotating bounding boxes, like on COCO) to use $C_8$ instead of $C_4$, despite the output (bounding boxes) only having symmetries in the $C_4$ group?\", \"Experiments showed that on both PASCAL VOC and COCO, performance with strict equivariance is inferior to that with the approximate equivariance introduced in the paper. For intuition, could the authors provide specific (intuitive or experimental) examples of 2D object detection in which it is beneficial to break the strict rotation equivariance?\", \"I am willing to raise my rating if my concerns are addressed.\"], \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"Potential (but unlikely) plagiarism: the submission is very similar to a preprint that has been on arXiv [1] since August 2024, and Figure 8 even mistakenly re-uses the method name from the ArXiv paper. It would be useful to verify that the authors of the ICLR paper and arXiv paper are the same, which would mean that they just changed their own paper title and method name.\\n\\n[1]: \\\"Wu, Zhiqiang, et al. \\\"SBDet: A Symmetry-Breaking Object Detector via Relaxed Rotation-Equivariance.\\\" arXiv preprint arXiv:2408.11760 (2024).\\\".\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Kind Reminder to Reviewer AppE\", \"comment\": \"Dear Reviewer AppE,\\n\\nThank you for your contribution to the review process of the ICLR25 community.\\n\\nSince we have earnestly addressed your concerns in our rebuttal responses, we look forward to your valuable and timely responses, which are very important to this work.\\n\\nIf you have any other questions, please don't hesitate to contact us anytime.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"title\": \"Author Rebuttal to Reviewer L1nv: Part 2\", \"comment\": \"> **Question:**\\nIn Figure 1 on the left, the input in the lower left corner mapped to the output in the upper right corner should use the relaxed rotation-equivariant function.\\n\\n**Answer 3:** In fact, it is correct to map low symmetry to high symmetry using rotation-equivariant functions.\\n\\nDefine the function $Sym(\\\\cdot)$, which denotes the degree of symmetry of $\\\\cdot$, and \\nthe strict equivariant function $f_{\\\\text{strict}}$ with input $x$.\\n\\nAccording to the **Curie principle** [1], we have the following conclusion:\\n$Sym(x) \\\\leq Sym(f_{\\\\text{strict}}(x))$.\\n\\nTherefore, the strict equivariant function cannot map the input to lower symmetry since its output should at least have the same symmetry as the input, which is known as the Symmetry-Breaking problem.\\n\\nOn the other hand, the strict equivariant function can map the input to higher symmetry, as the input, after going through the strict equivariant function, can exhibit equivariance, thus achieving higher symmetry.\\n\\nDue to the lower symmetry present in real-world data, modeling it using a strictly equivariant function fails to map the input to lower symmetry, thereby deviating from the characteristics of real-world data and affecting feature learning.\\n\\nFor more details, please refer to Figure 1 in (Kaba and Ravanbakhsh, 2023) [1].\\n\\nConversely, the relaxed equivariant function can resolve the Symmetry-Breaking problem.\\n\\nThe rationale is simple: using the relaxed equivariant function can relax the original symmetry of the input,\\nthereby mapping it to lower symmetry, aligning it with the characteristics of real-world data, and enhancing feature learning.\\n\\n**References:**\\n\\n[1] Sekou-Oumar Kaba and Siamak Ravanbakhsh. Symmetry breaking and equivariant neural networks. \\n\\n**Thank you again for your review. 
We look forward to your valuable and timely response, and we are willing to address all your concerns.**\"}",
"{\"title\": \"Kind Reminder to Reviewer e3oq\", \"comment\": \"Dear Reviewer e3oq,\\n\\nThank you for your contribution to the review process of the ICLR25 community.\\n\\nSince we have earnestly addressed your concerns in our rebuttal responses, we look forward to your valuable and timely responses, which are very important to this work.\\n\\nIf you have any other questions, please don't hesitate to contact us anytime.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"metareview\": \"The paper introduces Relaxed Rotation-Equivariant GConv (R2GConv), addressing the limitations of traditional GConv models in handling symmetry-breaking, particularly rotational symmetry-breaking. Traditional GConv assumes strict equivariance, which fails to account for real-world deviations in symmetry. The proposed R2GConv relaxes the rotational transformations, introducing only minimal additional parameters. The authors apply R2GConv in the Relaxed Rotation-Equivariant Network (R2Net), and develop the R2Det object detector for 2D detection tasks. Experimental results demonstrate that R2GConv improves natural image classification, while R2Det achieves strong performance in 2D object detection with better generalization and robustness under symmetry-breaking conditions.\\n\\nAll reviewers agree to accept this paper. The authors are required to update this paper when preparing the final version, considering the valuable reviews during the rebuttal period.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers agree to accept this paper. The authors are required to update this paper when preparing the final version, considering the valuable reviews during the rebuttal period.\"}",
"{\"title\": \"Author Rebuttal to Reviewer e3oq: Part 2\", \"comment\": \"> **Weakness 2:**\\nIt would be beneficial to provide concise explanations for abbreviations like SO(2) (Special Orthogonal Group in 2D space) and ER2GCBA (Efficient Relaxed Rotation-Equivariant Group Convolution plus -BA, where the meaning of -BA remains undefined).\\n\\n**Answer 2:**\\nSorry for the confusion.\\n\\nThe SO(2) group is an infinite group that contains a set of all two-dimensional rotation angles.\\nIn fact, $\\\\mathbf{C}_n$ is a discrete subgroup of SO(2).\\nThe '-BA' denotes BatchNorm and Activation operation, which is a standard practice in convolutional neural networks.\\n\\nThank you very much for pointing out these issues. In the latest revised version, we have removed the notation '-BA' and provided detailed explanations of SO(2). Furthermore, **Reviewer AppE** has provided valuable suggestions regarding these abbreviations, and we have adopted a standardized notation. These suggestions have contributed to the high quality of the paper. 
Please see the latest revised version.\\n\\n> **Weakness 3:**\\nIt would be beneficial to investigate the performance of R2Det in larger configurations, such as 'R2Det-L', and compare its performance to the corresponding 'large' versions of YOLO models, to provide a more complete assessment of the model's scalability and potential limitations in handling more complex and computationally demanding tasks.\\n\\n**Answer 3:**\\nWe provide the results of R2Det-L on VOC and COCO datasets, as shown below:\\n|Model|Year|Dataset|$\\\\text{AP}_{50}$|$\\\\text{AP}_{50:95}$|FLOPs|Params.|\\n|-|-|-|-|-|-|-|\\n|RT-DETRv2-L|2024|COCO|71.6|53.4|136G|42M|\\n|YOLO11-L|2024|COCO|70.1|53.4|86.9G|25.3M|\\n|YOLO11-X|2024|COCO|71.6|54.7|194.9G|56.9M|\\n|R2Det-L|2024|COCO|**72.4**|**56.1**|28.3G|42.8M|\\n|R2Det-L|2024|VOC|88.3|71.9|28.1G|42.7M|\\n\\nThe table shows that R2Det-L achieves **state-of-the-art performance** in the COCO dataset compared to the latest models. We will add this latest result and comparison in the final version.\\n\\n> **Weakness 4:**\\nIt would be beneficial to include a comparative study with more recent and advanced object detection models, such as YOLOv11 and others, to provide a broader context and demonstrate the model's performance relative to the state-of-the-art.\\n\\n**Answer 4:**\\nIn fact, YOLO11 was only released on **September 27, 2024**, and we did not notice this latest model. You can view the latest results and comparisons in **Answer 3**.\\n\\n> **Weakness 5:**\\nI noticed an interesting discrepancy in the results presented on page 8, specifically in Table 1 and Table 2. While both tables use the VOC test dataset, the reported AP scores for the SRE (b=0) model in Table 1 (C4) differ from the reported SRE scores in Table 2 (C4). 
**In Table 1, the AP50(%) and AP50:95(%) for b=0 are 83.8 and 64.4 respectively, whereas in Table 2, the SRE AP50(%) and AP50:95(%) for C4 are 82.9 and 64.2.** This discrepancy raises a question regarding potential differences in the implementation of SRE versus the b=0 setting, or if it could be due to variations in the experimental runs. The authors may please comment on this interesting observation and clarify the reasons behind the difference in AP scores.\\n\\n\\n**Answer 5:** In fact, the model with $b=0$ in Table 1 corresponds to the RRE ($b=0$) model, not the SRE model in Table 2. The initial value of $\\\\Delta$ in the RRE ($b=0$) model is $0$, but it updates end-to-end with gradient descent. Therefore, they have different reported results. \\n\\nFor a detailed explanation, please see the explanation in **Weakness 1**.\\n\\n> **Answer to Questions part**: See the answers in **Weaknesses** part. \\n\\n**Thank you again for your review. We look forward to your valuable and timely response, and we are willing to address all your concerns.**\"}",
"{\"title\": \"Kind Reminder to Reviewer L1nv\", \"comment\": \"Dear Reviewer L1nv,\\n\\nThank you for your contribution to the review process of the ICLR25 community. \\n\\nSince we have earnestly addressed your concerns in our rebuttal responses, we look forward to your valuable and timely responses, which are very important to this work.\\n\\nIf you have any other questions, please don't hesitate to contact us anytime. \\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"title\": \"Looking forward to your valuable reply\", \"comment\": \"Dear Reviewer L1nv,\\n\\nSorry to bother you again. \\n\\nAs the rebuttal phase nears the end, we would like to know if we have addressed your concerns.\\n\\nIf you have any remaining concerns, please let us know. We look forward to your valuable reply.\\n\\nThank you for your efforts in our paper.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"comment\": \"Dear authors, thank you for your thorough responses, which addressed my concerns, I maintain my initial score.\"}",
"{\"comment\": \"Dear Reviewer e3oq,\\n\\nThank you very much for your efforts in reviewing the manuscript and for acknowledging our work.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"summary\": \"This paper points out that symmetry breaking often occurs in the real world. However, traditional GConv-based methods are limited by strict operational rules in group space, ensuring strict equivariance of features only under a limited set of group transformations, making them difficult to adapt. The paper defines the relaxed rotation-equivariant group R4 based on the strict rotation-equivariant group C4 and proposes the relaxed rotation GConv (R2GConv). The paper constructs R2Det using GConv and its derived convolutional structures, achieving excellent results on the PASCAL VOC and MS COCO 2017 datasets, and also verifies the good performance of R2Det in classification and segmentation tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a novel relaxed rotation-equivariant group convolution (R2GConv), which extends existing equivariant neural networks (ENNs). Additionally, the resulting model, R2Det, shows strong performance across various datasets and tasks.\\n\\n2. The authors enhance the R2GConv module by incorporating depth-wise and point-wise convolution, and conduct extensive comparative and ablation experiments to confirm its positive impact on the outcomes.\\n\\n3. The paper is well-written and easy to understand.\", \"weaknesses\": \"1. Dataset Limitation: The selected datasets, PASCAL VOC and MS COCO 2017, do not emphasize rotation characteristics, which reduces the impact and relevance of the experimental results. To better highlight the effects of SRE and RRE modeling, rotation-specific object detection datasets should be used.\\n\\n2. Insufficient Baseline Comparison: it would be beneficial to include comparisons with established models in rotation object detection, such as ReDet and FRED, to strengthen the evaluation and provide more convincing evidence.\", \"questions\": \"1. 
In Figure 1 on the left, the input in the lower left corner mapped to the output in the upper right corner should use the relaxed rotation-equivariant function.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for the kind reply. It addressed my concerns. I keep my scores for the acceptance of this paper.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Dear authors,\\n\\nThank you for your detailed answer to the review.\\n\\nSince the updated version and the rebuttal:\\n- provided additional evidence about the significance of the approach by giving a mathematical proof that $C_n$-R2GConv is a relaxed rotation-equivariant block, and by running an ablation study to show that the performance is not only due to the introduced architecture but also to rotation equivariance itself (in the answer to Question 1.2)\\n- fixed the partially incorrect definition and clarified my misunderstanding about whether or not the method was a case of relaxed rotation equivariance, and improved the preliminary section accordingly\\n- simplified the reading of the paper by reducing the number of new abbreviations and acronyms, providing easier-to-read illustrations (see Figures 2 and 3), and providing more intuition (in \\\"Visualization of rotational Symmetry-Breaking\\\", lines 517-530)\\n- scaled the approach to a \\\"large\\\" version to demonstrate an even more impressive performance\\n\\nI am willing to raise my rating significantly and recommend acceptance of the paper.\\n\\nIn addition, I have a final question regarding the method: how fast is the approach at inference (in FPS)? Is it faster than YOLO-like models thanks to the reduced number of FLOPS or is it slow because of the \\\"custom\\\" operations? (Even if it is slower, I believe this is not necessarily a disadvantage of the method as it may still be optimized with more fine-tuned implementations)\"}",
"{\"title\": \"Author Rebuttal to Reviewer AppE: Part 1\", \"comment\": \"**Thank you for taking the valuable time to review the manuscript. The following may address your concerns.**\\n\\n> **Weakness 1:**\\n> The provided background and naming of methods are misleading and partially incorrect. \\n> - **Weakness 1.1:** Indeed, relaxed equivariance is defined in (Kaba and Ravanbakhsh, 2023) as a relaxation that allows breaking the symmetry of inputs and mapping to arbitrary orbit types when necessary. \\n> - **Weakness 1.2:** Note that the output of the function is still predictable under the transformation of the input. \\n> - **Weakness 1.3:** While in R2Det, the relaxed equivariance is mistakenly defined (definition 1, line 161) using the definition of approximate equivariance (despite referring to the definition from (Wang et al., 2022a) which correctly names it $\\\\epsilon$-approximate equivariance). \\n> - **Weakness 1.4:** Furthermore, the introduced filter is called a \\\"relaxed rotation-equivariant filter\\\" but is implemented by allowing for some learnable perturbation, which is therefore NOT a relaxed rotation-equivariance module. \\n> - **Weakness 1.5:** Figure 1a also incorrectly illustrates the problem being tackled in the paper.\\n> - **Weakness 1.6:** Notably, relaxed equivariance was introduced as an alternative to noise-injection methods: \\\"offering an alternative to the noise-injection methods\\\" (see the abstract from Kaba and Ravanbakhsh, 2023). \\n> - **Weakness 1.7:** The above-mentioned problems make the paper's claims incorrect and could lead to important misunderstandings of already established concepts.\\n\\n**Answers to Weakness 1.1/1.3/1.5/1.6/1.7:**\\nSorry for the confusion. \\n\\nIn the revised version of our paper, we have explicitly provided the definitions of relaxed equivariance from (Kaba and Ravanbakhsh, 2023) and approximate equivariance from (Wang et al., 2022a). 
Please refer to the revised version of our paper.\\n\\nRelaxed equivariance is defined in (Kaba and Ravanbakhsh, 2023), where the concept of noise-injection is mentioned as **a method** to construct relaxed equivariant networks. The noise is added to the data before processing it through an equivariant network. \\n\\nRelaxed equivariance allows for breaking the symmetry of inputs and mapping to arbitrary orbit types when necessary. The key aspect is that the output of the function remains predictable under the transformation of the input. This definition emphasizes flexibility in handling symmetries while maintaining predictability. \\n\\nOn the other hand, approximate equivariance, as defined by Wang et al. (2022a), focuses on the similarity in the output under the same group transformation. It allows for small deviations in the output, making it more practical for real-world applications.\\n\\n**In fact, approximate equivariance is a case of relaxed equivariance, and they are actually solving the same problem, i.e., the symmetry-breaking. The definition of relaxation is broader, but it is highly abstract and difficult to measure the degree of relaxation; Wang et al. define approximate equivariance from the computable perspective of L2 norm, using $\\\\epsilon$ to measure the degree of relaxation, which is more applicable.**\\n\\nTherefore, in this work, we assume that the symmetry-breaking encountered in visual image data under the rotation group, as introduced by Kaba and Ravanbakhsh (2023), also satisfies the definition of approximate equivariance from Wang et al. (2022a). This assumption allows us to introduce and implement the \\\"relaxed rotation-equivariant filter\\\" while retaining the benefits of both relaxed and approximate equivariance.\\n\\nBased on the above definitions, we proposed a relaxed rotation-equivariant filter. 
This filter is designed to allow for some learnable perturbation, which is crucial for maintaining the flexibility of the relaxed equivariance framework. \\n\\nIn the latest revised version, we have clarified these issues, including precisely defining relaxed and approximate equivariance and refining any content that may have caused reader misunderstandings. We have also elaborated on how we have leveraged these concepts to propose and implement the relaxed rotation-equivariant filter.\"}",
"{\"title\": \"Welcome to discussion!\", \"comment\": \"Dear Reviewers:\\n\\nThank you for your review! We have uploaded the latest revised version, with blue markings indicating new or modified content. We look forward to your valuable discussion.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"title\": \"Author Rebuttal to Reviewer L1nv: Part 1\", \"comment\": \"**Thank you for taking the valuable time to review the manuscript. The following may address your concerns.**\\n\\n> **Weakness 1:**\\nDataset Limitation: The selected datasets, PASCAL VOC and MS COCO 2017, do not emphasize rotation characteristics, which reduces the impact and relevance of the experimental results. To better highlight the effects of SRE and RRE modeling, rotation-specific object detection datasets should be used.\\n\\n**Answer 1:** The selected datasets, PASCAL VOC and MS COCO 2017, are widely recognized and established benchmarks in the field of object detection. Although these datasets do not specifically emphasize rotation characteristics, **they present a more challenging scenario as real-world data often encounters noise or occlusions.** \\n\\nIn fact, our work focuses on exploring rotation-equivariant properties on natural image datasets, in order to propose a universal object detection algorithm (e.g., YOLO series, DETR series) that utilizes these properties. Using rotation-equivariance can learn the intrinsic rotational symmetry of objects, in order to obtain better representations and improve the performance and generalization ability of the model. Therefore, we don't need the dataset to have rotational features.\\n\\nTo further address your concerns, we have conducted experiments on rotated datasets.\\nPlease refer to the classification experiment on the ROT-MNIST dataset in **Appendix A.5** of the paper.\\n\\nOur designed ROT-MNIST dataset differs from the standard Rotated MNIST, as it is specifically crafted to assess the robustness of our R2Net. We manipulated the training set by randomly rotating 60,000 images by 0, 90, 180, and 270 degrees, while keeping 10,000 images unaltered in the test set to evaluate the model's performance under rotation. \\n\\nWe compared the training accuracy of YOLOv8-N-CLS and our R2Net-N on the ROT-MNIST dataset. 
Our R2Net-N demonstrates superior stability and achieves higher accuracy compared to YOLOv8-N-CLS.\\nThis experiment also demonstrates that our approach can effectively model rotated datasets.\\n\\n> **Weakness 2:**\\nInsufficient Baseline Comparison: it would be beneficial to include comparisons with established models in rotation object detection, such as ReDet and FRED, to strengthen the evaluation and provide more convincing evidence.\\n\\n**Answer 2:**\\nAlthough these approaches involve the concept of rotational equivariance, they still differ significantly in research objectives and tasks, and are different from our 2D detection tasks.\\n\\nOur R2Det's task is general object detection in natural images, where the output consists of vertical or horizontal bounding boxes.\\n\\nThe reason for our task involving rotation-equivariance is to explore the inherent rotation-equivariant (symmetric) properties of the objects themselves, thereby enhancing model rotational feature learning.\\n\\nHowever, ReDet and FRED both focus on a specific task: oriented object detection in the aerial image field, where the output consists of rotated bounding boxes. \\nReDet and FRED focus on rotation-equivariance to better extract rotation-equivariant features, improving the accuracy of orientation prediction.\\n\\nTherefore, while R2Det and ReDet/FRED both involve the concept of rotation-equivariance, they focus on different tasks. \\n\\nIt is worth noting that, to our knowledge, we are the first to explore (relaxed or strict) rotation-equivariance in the context of natural image object detection tasks.\"}",
"{\"comment\": \"Dear Reviewer AppE,\\n\\nFirstly, thank you very much for your suggestions on the revision of our paper, which has contributed to its high quality.\\n\\nSecondly, regarding the concern of R2Det's inference speed, we hope the following explanation can resolve your confusion.\\n\\nIndeed, as you mentioned, \\\"custom\\\" operations are causing R2Det's inference speed to be slower.\\n\\nWe test the speed of R2Det-N compared with YOLOv8-N on an RTX4090, and the results are as follows:\\n\\n**Table 1:** Inference time per image of R2Det-N and YOLOv8-N on an RTX4090.\\n|Model|Inference time|\\n|-|-|\\n|YOLOv8-N|0.9ms|\\n|R2Det-N|2.1ms|\\n\\nThe main reason for the slow inference of the model is that the specific **affine transformation** currently does not have a dedicated optimization algorithm.\\n\\nFor this purpose, we have modified a dedicated cuDNN operator into Efficient R2GConv on GitHub [1], which has the following model inference speed.\\n\\n**Table 2:** Running Time of Efficient R2GConv with input channel 512, output\\nchannel 512, height 640, width 640, kernel size 3, stride 1, and\\npadding 1 on Pytorch and dedicated cuDNN on an RTX4090.\\n|Type|Inference Time|\\n|-|-|\\n|Efficient R2GConv with Pytorch| 0.1431ms|\\n|Efficient R2GConv with a dedicated cuDNN|0.0903ms (-0.0528ms, **36.9\\\\% $\\\\downarrow$)**|\\n\\nIn fact, we believe that through specialized operators, inference time can be greatly optimized. **This seems to be an engineering problem that will be solved in the future**. \\n\\nThirdly, the reason why Efficient R2GConv has lower FLOPs is mainly due to the extensive use of Point-wise and Depth-wise operators. However, Pytorch has low efficiency for the Depth-wise operator, **which is also a direction for improving speed in the future**. \\n\\nAdditionally, you can refer to **Appendix A.9** for theoretical calculations of the parameters of our Efficient R2GConv.\\n\\nFinally, thank you for your diligent review again. 
If you have any questions, we are still happy to answer them.\\n\\nBest regards,\\n\\nAuthors.\\n\\n[1] https://github.com/diningeachox/G-CNN\"}"
]
} |
EUe0yA2pAw | On Exact Bit-level Reversible Transformers Without Changing Architectures | [
"Guoqiang Zhang",
"JP Lewis",
"W. Bastiaan Kleijn"
] | Various reversible deep neural networks (DNN) models have been proposed to reduce memory consumption in the training process. However, almost all existing reversible DNNs either require special non-standard architectures or are constructed by modifying existing DNN architectures considerably to enable reversibility. In this work we present the BDIA-transformer, which is an exact bit-level reversible transformer that uses an unchanged standard architecture for inference. The basic idea is to first treat each transformer block as the Euler integration approximation for solving an ordinary differential equation (ODE) and then incorporate the technique of bidirectional integration approximation (BDIA) (originally designed for diffusion inversion) into the neural architecture, together with activation quantization to make it exactly bit-level reversible. In the training process, we let a hyper-parameter $\gamma$ in BDIA-transformer randomly take one of the two values $\{0.5, -0.5\}$ per training sample per transformer block for averaging every two consecutive integration approximations. As a result, BDIA-transformer can be viewed as training an ensemble of ODE solvers parameterized by a set of binary random variables, which regularizes the model and results in improved validation accuracy. Lightweight side information per transformer block is required to be stored in the forward process to account for binary quantization loss to enable exact bit-level reversibility. In the inference procedure, the expectation $\mathbb{E}(\gamma)=0$ is taken to make the resulting architectures of BDIA-transformer identical to transformers up to activation quantization. Our experiments in both image classification and language translation show that BDIA-transformers outperform their conventional counterparts significantly in terms of validation performance due to the regularization effect of the set of $\gamma$ random variables while also requiring considerably less training memory. 
| [
"transformer",
"ViT",
"BDIA",
"reversibility",
"ODE solvers"
] | https://openreview.net/pdf?id=EUe0yA2pAw | https://openreview.net/forum?id=EUe0yA2pAw | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yUDMBjJYxi",
"uVoWMrF6lT",
"tlTOQ8BwBk",
"cPCC4hrnwF",
"P6k4uoFsP3",
"DuSxs1g80C",
"DaUVtv7ebb"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1731034809462,
1731119171126,
1731289632475,
1730994900651,
1731024483086,
1732269619927,
1729709883998
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12118/Reviewer_g9w5"
],
[
"ICLR.cc/2025/Conference/Submission12118/Reviewer_ovSj"
],
[
"ICLR.cc/2025/Conference/Submission12118/Reviewer_6qsG"
],
[
"ICLR.cc/2025/Conference/Submission12118/Reviewer_iNrX"
],
[
"ICLR.cc/2025/Conference/Submission12118/Reviewer_Jk9h"
],
[
"ICLR.cc/2025/Conference/Submission12118/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12118/Reviewer_r442"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces BDIA-transformers, a novel approach to reversible transformers designed to reduce memory consumption during training without altering the architecture during inference. The approach leverages the Bidirectional Integration Approximation (BDIA) technique, which treats each transformer block as an Euler integration approximation for solving ordinary differential equations (ODEs). The experimental results show that BDIA-transformers outperform standard transformers in image classification and language translation tasks while reducing training memory requirements. For text prediction, BDIA-GPT2 prevents overfitting when trained on small datasets.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses the \\u201cmemory wall\\u201d challenge, a critical issue in deep learning in which training large models requires extensive memory to store intermediate activations alongside the model itself. Authors explore reversible deep learning to tackle this issue. Reversible architectures allow backpropagation without storing or with minimal storage of intermediate activations, thus offering significant memory savings. The authors propose a novel technique by treating each transformer block as an Ordinary Differential Equation (ODE) solver and applying the Bidirectional Integration Approximation (BDIA) to these blocks.\", \"Experimental results show that BDIA-transformers improve performance across various tasks, including image classification and language translation. Authors claim that performance gains and memory efficiency are gained by implementing this method on Vision Transformers.\", \"This approach has the potential to enable the training of larger, more complex models when memory is the bottleneck, but we have very strong computational power.\"], \"weaknesses\": [\"The paper is hard to follow, especially in the preliminary and method sections. 
It doesn\\u2019t clearly explain key ideas, like why transformers can be viewed as ODE solvers or how BDIA transformation is applied, which makes it confusing.\", \"The related work section mainly lists past studies without explaining how the field has developed or why this approach is needed. This makes it hard to see where this paper fits in with previous work.\", \"Some arguments supporting the method are vague. For instance, in the subsection \\\"on similarity to dropout technique\\\".\", \"Overall, a clearer, simpler structure and explanations would make the paper easier to understand, especially for readers unfamiliar with reversible deep learning.\", \"The paper attributes BDIA-transformer's improved performance to the regularization effect of random variables, which is said to work similarly to dropout. However, it\\u2019s not clear if this is the only reason for the performance gains. The authors haven\\u2019t compared BDIA against other standard regularization methods on the baseline transformer, leaving open the chance that similar improvements could be achieved without the added complexity of BDIA.\", \"For language models, benchmarks and perplexity provide more informative insights into model quality. Could the authors test their methods against GPT-2 using additional metrics?\"], \"questions\": [\"How does the extra computation time for a BDIA transformer compare to that of a regular transformer? In what situations does this additional overhead make sense? Could the authors also report the wall-clock performance of their method?\", \"Could the authors explain why they view a sequence of transformer blocks as a single ODE solver? This idea seems to fit better with diffusion models, as proposed by Zhang et al. 2023, but it\\u2019s not as clear for transformers.\", \"How well does the BDIA-transformer handle larger datasets or bigger models? 
Would the memory and speed trade-offs still hold up in these cases?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents BDIA-transformer, a novel approach to creating reversible transformers that maintain their original architecture during inference. The authors incorporate the recently proposed bidirectional integration approximation (BDIA) technique, originally proposed for diffusion inversion, into the transformer architecture training process.\\n\\nDuring the training process, a hyper-parameter $\\\\gamma \\\\in \\\\{0.5, -0.5\\\\}$ is randomly selected for each sample and transformer block to average consecutive integrations. This approach effectively trains an ensemble of ODE solvers. At inference time, the expectation $E(\\\\gamma) = 0$ is used, reducing the model to a standard transformer with quantized activations.\\n\\nThe authors observed that the BDIA update expression is only theoretically reversible when using floating-point arithmetic, leading to error accumulation, especially for deep networks. To address this issue and enable lossless online backpropagation, they apply activation quantization to achieve exact bit-level reversibility.\\n\\nExperimental results on various tasks, including image classification, language translation, and text prediction, demonstrate that BDIA-transformer (1) uses significantly lower overall memory during training (compared to ViT) and (2) acts as a regularizer, improving validation accuracy (over both RevViT and ViT) and reducing overfitting on small datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The background on the BDIA technique and its application in diffusion inversion is well-written and provides a natural motivation for its application in transformers.\", \"The authors provide a clear motivation for their work and the necessity of reversible transformer architectures.\", \"The application of this technique to transformers is novel and well-motivated.\", \"More specifically, the connection between diffusion inversion and neural ODEs is an 
interesting one and provides a unique perspective on the problem.\", \"Experimental results demonstrate the effectiveness of the proposed BDIA-transformer on various tasks.\"], \"weaknesses\": [\"My main concerns with the paper are with the presentation and clarity of the methodology. Specifically:\", \"The paper begins (both in the abstract and introduction) with heavy references to the parameter $\\\\gamma$. However, the significance or meaning of this parameter is not immediately clear to the reader.\", \"The choices for specific values of $\\\\gamma$ are not well-motivated. Further, for each $\\\\gamma$ choice, seemingly different amounts of side-information are required, but these details are also not explained well.\", \"Similar to the first point, the reason for the existence of the activation quantization is not clear until much later in the paper. This is also something that should be explained much earlier in the paper (e.g., in the introduction).\"], \"questions\": [\"Can you provide more intuition for the choice of $\\\\gamma$ values? Why are these values chosen, and what do they represent? Further, do you have any theoretical insights into the choice of these values?\", \"Can you clarify, more explicitly, how you arrived at 1 bit for $\\\\gamma \\\\in \\\\{0.5, -0.5\\\\}$ and 2 bits for $\\\\gamma \\\\in \\\\{0.25, -0.25\\\\}$? How does this relate to the $\\\\mathbf{s}_{k-1}[m]$ variable from equation (20)?\", \"In section 5.1, you mention that you're training with $K=6$ transformer blocks. Later, when talking about the memory overhead, however, you denote the side information as $\\\\{s_k\\\\}_{k=0}^3$. Can you clarify this? Why is the side information only stored for the first 4 blocks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes BDIA-transformer, a novel type of reversible transformer based on the bidirectional integration approximation (BDIA). A random hyperparameter $\\\\gamma$ is introduced per transformer block per training sample to regularize the models. The paper further performs activation quantization to allow for exact bit-level reversibility of BDIA-transformers. Empirical results show that the BDIA technique outperforms baseline transformers and reduces training memory for image classification and language translation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Clarity: The authors clearly define the problem, related work, and their proposed method with well-written equations.\", \"Significance: The paper proposes a unique approach to achieve bit-level reversibility in transformers without architectural changes, leveraging techniques from ODE solvers and quantization. The introduction of the binary random variables also serves as a good regularization strategy.\"], \"weaknesses\": [\"Lack of experiments: The paper compares BDIA-transformers with standard transformers and RevViT. It would benefit from a broader evaluation against other reversible architectures or quantization methods.\", \"The reliance on storing lightweight side information for exact reversibility might reduce its practical applicability as depth increases. It would be nice if this trade-off between memory efficiency and information storage is more analyzed.\"], \"questions\": [\"What are the failure modes of BDIA-transformers? 
In which scenarios can the BDIA-transformers underperform compared to other transformers?\", \"Could you also report the comparison of BDIA-transformers and other architectures in terms of training time and computational cost?\", \"Minor: In Figure 3, the y-axis label should be \\u201ctraining loss\\u201d\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a novel type of reversible transformers with the aim of reducing memory usage during training. To this end, this work treats each transformer block as the Euler integration approximation in a manner similar to Neural ODEs. There are two main contributions. Firstly, the authors borrow a technique from recent works on diffusion inversion for round-trip image editing, which involves bidirectional integration approximation. This approximation introduces a hyperparameter $\\\\gamma$. The authors propose selecting $\\\\gamma$ randomly as either -0.5 or 0.5 for each training sample and training block. Consequently, the training can be viewed as an ensemble of ODE solvers. This regularization led to observed improvements on validation data. Secondly, to ensure reversibility, the authors propose performing activation quantization while storing side information. This approach is validated on small datasets involving image classification, machine translation, and language modeling.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is generally well-written.\", \"The paper addresses an important and timely problem: reducing the memory consumption during the training of transformers, which is particularly relevant given the current widespread use of transformer models.\", \"The proposed idea is compelling as it retains the original architecture of transformers. This stands in contrast to existing approaches that typically involve modifications to the transformer architecture.\"], \"weaknesses\": [\"The reproducibility is low as there is no source code, pseudocode, or detailed algorithm.\", \"Although the paper includes thorough mathematical derivations, these seem to be more aligned with concepts from residual networks (ResNets) rather than focusing specifically on transformers. 
Notably, in equation (4), the authors treat the combined attention and feed-forward network modules as a residual term, resulting in derivations similar to those found in NeuralODEs with ResNets. However, these modules are key differentiators in transformer architectures compared to other models.\", \"The experiments mainly consider small datasets or rely on toy examples for transformers.\"], \"questions\": [\"In figure 1, and in line 287, how did the authors integrate $\\\\gamma$ into standard transformers?\", \"In figure 2, the authors should show the reconstruction errors w.r.t. the proposed method using quantization and side information. Otherwise, the effectiveness of these tricks is not clear.\", \"The authors should experimentally compare the proposed methods against vanilla transformers with dropout.\", \"What dataset did the authors use in the machine translation experiments?\", \"Although the authors show the memory gains, they should show the convergence in terms of wall-clock time to better illustrate the computational complexity introduced by the proposed method.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The presented paper proposes a new training algorithm for transformers enabling reversibility up to a fixed precision level. While the paper itself focuses on transformers, the method seems to be applicable to any residual architecture. The method itself enables exact reversibility up to a given precision level, without architecture modification, and is thus broadly applicable. The authors introduce a regularizing parameter $\\\\gamma$ which substantially alleviates overfitting issues in Transformers.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and the algorithm is pretty easy to follow.\\n2. The experiments, while being rather small scale, do demonstrate a surprising regularization effect, able to tackle the overfitting issue of transformers.\", \"weaknesses\": \"1. The parameters $\\\\gamma_k$ seem to already exist in the original BDIA paper, and it is thus not clear what the novelty of this paper is on the matter.\\n2. The exact reversibility seems to require the quantization step but there is no ablation on the precision level $l$.\\n3. The small scale experiments do demonstrate a surprising regularization effect. However, the paper seems to seek reversibility, which is a feature usually used to scale up the model size given a fixed compute setup. Therefore, it is quite strange to focus on small scale experiments.\", \"questions\": \"Could the authors elaborate on the three weaknesses?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank all the reviewers for their appreciation of the novelty of our new approach for designing reversible transformers that preserves architectures in the inference stage. Also we are happy that the reviewers notice the significant performance improvement of BDIA-transformer over transformer for the tested tasks on small-scale image classification and language translation.\\n\\nDue to limited time to do large scale experiments in the rebuttal period, we decide to withdraw the paper and address the comments for a future submission.\"}",
"{\"summary\": \"The article derives a new method to make a transformer exactly reversible, based on a method initially dedicated to diffusion models and on the quantization of the activations. The proposed method also seeks to regularize the training of the model. Experiments are conducted on a variety of benchmarks with different transformer-based models, that is, vision-transformer, vanilla transformer and GPT-2. The proposed method achieves SOTA results on a vision task, while also preventing overfitting.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written, the ideas are clearly presented\", \"Several insights on how the method works and relates to previous work are provided such as its similarity to dropout\", \"The experiments effectively support the claims made in the article, such as the regularization effect of the method\"], \"weaknesses\": [\"The related work section is very short and surprisingly focuses on quantization, while quantization appears more in this work as a convenient trick to make information representation simpler rather than the primary topic of the article. Notably, the presentation of other works on reversible networks is missing (in this section)\", \"The whole section on quantization (section 4.3) appears a bit messy. For instance, there is a parameter $l$ which is never really discussed, making it seem quite arbitrary. Similarly, the information storage vector $s_k$ is defined in a technical manner, and the section would benefit from providing more intuitive insights into its function.\"], \"questions\": [\"There is a typo at line 169 (written $\\\\\\\\{-1/2, -1/2\\\\\\\\}$ instead of $\\\\\\\\{-1/2,1/2\\\\\\\\}$)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EUSkm2sVJ6 | How much of my dataset did you use? Quantitative Data Usage Inference in Machine Learning | [
"Yao Tong",
"Jiayuan Ye",
"Sajjad Zarifzadeh",
"Reza Shokri"
] | How much of my data was used to train a machine learning model? This is a critical question for data owners assessing the risk of unauthorized usage of their data to train models. However, previous work mistakenly treats this as a binary problem—inferring whether all-or-none or any-or-none of the data was used—which is fragile when faced with real, non-binary data usage risks. To address this, we propose a fine-grained analysis called Dataset Usage Cardinality Inference (DUCI), which estimates the exact proportion of data used. Our algorithm, leveraging debiased membership guesses, matches the performance of the optimal MLE approach (with a maximum error <0.1) but with significantly lower (e.g., $300 \times$ less) computational cost. | [
"Machine Learning",
"Privacy",
"Dataset Usage Inference",
"Dataset Ownership",
"Membership Inference Attack",
"Dataset Copyright"
] | Accept (Oral) | https://openreview.net/pdf?id=EUSkm2sVJ6 | https://openreview.net/forum?id=EUSkm2sVJ6 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yhCCMW45bo",
"xwwMCCq5ia",
"wIyVYYjnMn",
"orH4nzRvbE",
"njOTg74y3B",
"m116kRlA8w",
"kGC9QboJmi",
"isi8Y0MoH3",
"fBQ05a967J",
"doSUbPHjmd",
"dm8rmr88sZ",
"a4cAoFhPwo",
"XsQiqBZf2o",
"XrQgaTMqqJ",
"V1UEk1xlSd",
"RDNzYYK230",
"QX1bZEo6ib",
"NmWvOI1ECs",
"IBnMLnszTX",
"GOMV9lzCda",
"DARtpBdRGs",
"CSC0BNDTDx",
"C90yBqImWH",
"7k2pfoFzY3",
"5bscTKR8JC",
"3H8nvDJW6h",
"13wZ0wHtPN"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732292561305,
1730439560469,
1732291341507,
1732291491750,
1732649795890,
1732287965867,
1732288527722,
1732763985098,
1730670609273,
1732292783779,
1732395756475,
1730808010256,
1732292457492,
1732445210579,
1732289122900,
1732290178644,
1737523707601,
1732287593532,
1732290243075,
1734751837673,
1730701282488,
1732621804203,
1732745263945,
1732610779631,
1732764333884,
1730698277962,
1732286238503
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_BehN"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_THsi"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_Tzw7"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_BehN"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_mJ1c"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Area_Chair_A8jB"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_THsi"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_H5r7"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_mJ1c"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5454/Reviewer_H5r7"
],
[
"ICLR.cc/2025/Conference/Submission5454/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"> 2. If TPR=FPR, how does this affect the debiasing results?\\n\\nGood question. It would result in a zero denominator in Equation (6), so we actually assume $TPR \\\\neq FPR$ in Line 205. As discussed in Lines 206\\u2013207, this assumption should be reasonable and achievable for the following reasons:\\n\\n1. A membership identification method designed to discriminate between member and non-member data points, rather than acting randomly, is unlikely to produce $TPR = FPR$ all the time, especially when the dataset is not random.\\n\\n2. Since $TPR$ and $FPR$ can be adjusted by varying the threshold, it should always be possible to select a threshold such that $TPR \\\\neq FPR$.\"}",
"{\"summary\": \"This paper proposes an algorithm framework that can figure out whether data points in the data sets are used to train a model. This problem seems interesting, and there are some works on a similar problem called membership inference. The authors propose that instead of predicting each member by {0,1}, a better way is to predict a probability in [0,1] and design an algorithm for DUCI based on this idea. The authors also did many experiments comparing their method with several baselines on errors and confidence intervals.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper studies an interesting problem and proposes a new algorithm to predict a probability in [0,1] instead of {0,1} for the problem. Though the authors proposed a specific algorithm, the technique here can be used for any algorithm that serves the purpose of membership query. This paper is well-written and easy to read. The authors also did experiments thoroughly by comparing with baselines and on different datasets.\", \"weaknesses\": \"Though this paper has some novelty, the technique here seems to be quite simple and straightforward. It is hard for me to confidently say that this paper makes a strong contribution.\", \"questions\": \"The authors mention two possible improvements in Appendix G. I think this paper would be really strong if the authors could be more concrete on how to apply one of the ideas to their algorithm and show some results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To Reviewer H5r7\", \"comment\": \"Thank you for your important and interesting questions. We believe all our discussion will offer valuable insights for future work.\\n\\n> 1. My main concern is that this method requires known training set for estimating FPR and TPR. In practice, the train set is usually private (see following reference Zhang et al. 2024).\\n\\nThanks for raising this intriguing question. We are aware of the challenges in designing a non-member scenario for evaluating training data proof, as discussed by Zhang et al., 2024. Three key issues in the current mainstream design of evaluation that relate to our task are as follows:\\n\\n1. **Distribution shifts**: Non-member data collected based on the model's training cutoff date can introduce distribution shifts, making the evaluation artificially easier. \\n2. **Member uncertainty**: Not all data released before the cutoff are guaranteed to be members since the cutoff date provided by model developers may be inaccurate. \\n3. **Causal relationships**: Using hold-out counterfactual data as non-members can introduce causal dependencies, as the model may have been trained on a related, recently released version of the data.\\n\\nTo ensure reliable evaluation in our book copyright infringement case study, we fine-tune a GPT-2 model on a recently collected (by cutoff date) dataset, ensuring that the original training set has no causal relationship with the new data. Instead of directly treating the entire dataset as non-member data, we fine-tune the model on different proportions of data to create members and non-members. This ensures there is no distribution shift between members and non-members that could be exploited. 
Our focus here is to develop effective methods for the DUCI problem and demonstrate their superiority over baselines under a fairly designed evaluation, which we achieved.\\n\\n***Regarding the concern about evaluation on large production models:*** we agree that closed-source training pipelines of production models pose problems. However, this issue is independent of method design, and all training data proof evaluations face the same problems. This calls for more open-source (like Pythia [1]) evaluation benchmarks on different model architectures and datasets.\\n\\nLastly, regarding the TPR/FPR values used for debiasing in our method: we do not assume access to the target model's training set. Instead, TPR and FPR are estimated on our private dataset (known to the dataset owner) using a reference model. This reference model could be finetuned from a checkpoint released before the dataset\\u2019s creation, or could even be a smaller model with a different architecture trained on different data [2,3].\\n\\n*[1] Biderman, S., Schoelkopf, H., Anthony, Q. G., Bradley, H., O\\u2019Brien, K., Hallahan, E., ... & Van Der Wal, O. (2023, July). Pythia: A suite for analyzing large language models across training and scaling.*\\n\\n*[2] Duan, M., Suri, A., Mireshghallah, N., Min, S., Shi, W., Zettlemoyer, L., ... & Hajishirzi, H. (2024). Do membership inference attacks work on large language models?*\\n\\n*[3] Carlini, N., Tramer, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T. B., Song, D., Erlingsson, U., et al. Extracting Training Data from Large Language Models.*\\n\\n> 2. In Figure 3, the length of confidence interval seems to be large compared to the absolute error. Is this true?\\n\\nThanks for pointing out the potential confusion; we will improve the clarity of the confidence interval (CI) explanation in our paper. 
First, **the small value of MAE compared to the length of the 95\\\\% CI illustrates that our predictions are highly concentrated around the true value**, because if the ***unbiasedness of $\\\\hat{p}$ is empirically achieved, the MAE equals the standard deviation $\\\\sigma$ of $\\\\hat{p}$***. Second, given that the ***95\\\\% confidence interval is calculated according to the Lyapunov CLT*** ($\\\\hat{p}$ follows an approximately Gaussian distribution), ***its length is theoretically equal to $(\\\\mu + 2\\\\sigma) - (\\\\mu - 2\\\\sigma) = 4\\\\sigma$, which is approximately four times $\\\\sigma$ (the MAE)***.\\n\\nAs observed, the maximum length of the 95\\\\% confidence interval (CI) is approximately 0.12, while the maximum absolute error using a single reference model is 0.027 in Table 1\\u2014roughly one-quarter of the CI length. This shows that **the CI length is quite small and effectively demonstrates: (1) the empirical unbiasedness of our estimator and (2) the high concentration of our predictions around the true value.**\"}",
"{\"comment\": \"> 3. Would this debiasing method downgrade the test power?\\n\\nIt would not. If \\\"test power\\\" refers to dataset cardinality inference, our experiments in Tables 1\\u20134 clearly demonstrate that, without the debiasing process, directly aggregating MIA predictions results in significant error. The debiasing process substantially improves the test power for dataset cardinality inference.\\n\\nIf \\\"test power\\\" refers to membership inference (MI), the debiasing method does not affect it, as it serves as a post-processing step for membership predictions. That is, if MI is performed by comparing scores to a threshold, the same post-processing can be applied to adjust the threshold, ensuring no degradation in MIA performance. Moreover, intuitively, in scenarios where a single threshold is applied across all points, directly debiasing individual scores acts as a form of calibration, enabling consistent thresholding and potentially improving test power. However, since our primary goal is not to design best MIA, we leave such potential application for future work. In our design, the debiasing process is independent of the chosen MIA method, and the membership predictions remain unchanged within our framework.\\n\\n> 4. Is there any connection with auditing differential privacy?\\n\\nYes! **DP auditing via DUCI is theoretically feasible. When the training algorithm satisfies differential privacy ($\\\\varepsilon$-DP), provable upper bounds for the error of DUCI can be established via standard packing argument [1].** Loosely speaking, let the ground truth membership probability for record $i$ in the training dataset be $p_i$. By definition, datasets sampled under probabilities $(p_1, \\\\cdots, p_n)$ and $(p_1 \\\\pm \\\\frac{1}{n\\\\varepsilon}, \\\\cdots, p_n \\\\pm \\\\frac{1}{n\\\\varepsilon})$ differ by at most $\\\\frac{1}{\\\\varepsilon}$ records in expectation. 
\\nThus, under an $\\\\varepsilon$-DP training algorithm, with constant probability, no adversary can distinguish between datasets sampled with $(p_1, \\\\cdots, p_n)$ and $(p_1 \\\\pm \\\\frac{1}{n\\\\varepsilon}, \\\\cdots, p_n \\\\pm \\\\frac{1}{n\\\\varepsilon})$. As a result, this introduces an inevitable MAE of $\\\\frac{1}{\\\\varepsilon n}$ when estimating the dataset usage $\\\\frac{1}{n}\\\\sum_i p_i$ under an $\\\\varepsilon$-DP training algorithm. Thus, DUCI shows potential for use in DP auditing. However, the connection between DP auditing via DUCI and the prototypical DP auditing experiment (e.g., via repeated retraining runs) remains an interesting open question.\\n\\n*[1] Hardt, M., & Talwar, K. (2010, June). On the geometry of differential privacy.*\"}",
"{\"comment\": \"Thanks for answering my questions and addressing my suggestions.\\nI've updated my score.\"}",
"{\"comment\": \"> Poor results around p=0 in Table 4\\n\\nThis is a really important observation! In Table 4 (the book copyright scenario), all methods exhibit larger absolute errors when $ p = 0 $ with our method showing an error of around 0.1, compared to MIA Score with an error of 0.5. However, **this is not a sign that our method is unsuitable for answering the question of whether a dataset has been used. On the contrary, it highlights the practical motivation for dataset cardinality inference (i.e., different use cases require different thresholds, as specified in the U.S. Copyright Act).**\\n\\nAs the error decreases with increasing $ p $, and when $ p = 1 $, the error of our method is nearly zero (i.e., MAE = 0.01), even the worst baseline's error reduces to 0.175. This demonstrates that our method performs well when the dataset is used, with only minor confusion when the dataset is not used. **This trend is not observed in the image dataset** as shown in the table below. We believe the explanation for this behavior lies in the nature of language data: **frequently used sentences or phrases with high similarity appear across books, making it nearly impossible for two books to have no phrase-level overlap.** This is why, under the law, a low fraction of similarity in certain content types\\u2014such as axioms or public knowledge\\u2014is considered fair use.\\n\\n**Table:** *(Image Data)* Mean Absolute Error (MAE) $\\\\mathbb{E}[|\\\\hat{p}_i - p|]$ for all methods under different proportions $p$.\\n| $p$ | MIA Guess | MIA Score | Our Method |\\n|---------|-----------|-----------|----------------|\\n| 0.0 | 0.3025 | 0.2925 | **0.0208** |\\n| 0.2 | 0.2375 | 0.1788 | **0.0214** |\\n| 0.4 | 0.1635 | 0.0624 | **0.0261** |\\n| 0.6 | 0.0937 | 0.0535 | **0.0223** |\\n| 0.8 | 0.0407 | 0.1732 | **0.0165** |\\n| 1.0 | 0.0526 | 0.2905 | **0.0135** |\\n|---------|-----------|-----------|----------------|\\n|$\\\\max_p \\\\text{MAE}$ | 0.3025 | 0.2925 | **0.0261** |\"}",
"{\"comment\": \"***Regarding the Concern About the Performance of Our Method in Determining Dataset Usage***\\n\\nAs discussed in Lines 35\\u201346 and shown in Figure 1, while methods restricted to binary predictions under an all-or-none dataset usage scenario cannot ensure consistent predictions for partial utilization, **a method providing fine-grained estimates can naturally be reduced to solve the binary problem.**\\n\\nTo illustrate this, consider the null hypothesis ($H_0: s = s' + \\\\tau$, i.e., the target model is not trained on the protected dataset) used in prior binary dataset inference literature. For different contexts, $s$ and $s'$ can take the following forms:\\n1. **Dataset Inference [1]**: $s$ and $s'$ represent the distances to the decision boundary measured on a private dataset and a population dataset, respectively. This hypothesis assumes that if the model was trained on the private dataset, the distance measured on the private dataset would exceed that on the public dataset.\\n2. **LLM Dataset Inference [2]**: $s$ and $s'$ are the weighted aggregations of 52 MIA scores over the private and population datasets. This hypothesis assumes that merged MIA scores would be significantly higher for members than non-members over enough samples.\\n3. **Backdoor Watermarks [3]**: $s$ and $s'$ are the confidence scores on the target label given backdoored inputs and given clean inputs, respectively. This hypothesis assumes that a model trained on a poisoned dataset (if successfully backdoored) will assign higher confidence to the target class when triggered, but not for clean inputs.\\n\\nFor DUCI, a straightforward simplification to the dataset inference problem can be made by setting $s = \\\\hat{p}$ and $s' = 0$, with $\\\\tau$ serving as a threshold, which may vary depending on the data type. 
Below, we report the performance of our method adapted for solving the binary dataset inference problem.\\n\\n**Table:** Comparison of p-values between DUCI and binary dataset usage algorithms for determining whether a dataset $X$ (size 500) has been used. The complete training dataset of the target model has a size of 25,000. For p-values, a smaller value for **Dataset Used** is better, while a larger value for **Dataset Not Used** is better.\\n\\n| **Methods** | **p-value (Dataset Used \\u2193)** | **p-value (Dataset Not Used \\u2191)** |\\n|---------------------------------------|------------------------------|-----------------------------------|\\n| Backdoor Watermark (poison 30% of $X$) | $7.10 \\\\times 10^{-5}$ | 0.334 |\\n| Backdoor Watermark (poison 100% of $X$) | $\\\\mathbf{6.18 \\\\times 10^{-54}}$ | **1.000** |\\n| Dataset Inference | $7.27 \\\\times 10^{-10}$ | 0.937 |\\n| Ours | $\\\\mathbf{3.15 \\\\times 10^{-51}}$ | **1.000** |\\n\\nConsistent with the performance shown in Figure 1, all methods can perfectly solve the binary dataset usage problem when the significance level is set to common thresholds such as 0.05 or 0.01. **For backdoor watermark methods, the main challenge lies in the successful injection of backdoor when the dataset is not fully sampled or when the protected dataset's relative size is small. This will significantly impact performance, e.g., poisoning even 30% of X performs poorly when $|X|$ is small.\\nOur method performs exceptionally well, achieving comparable results to backdoor watermarking when the entire dataset is poisoned.** In principle, the performance of Dataset Inference should be close to our method; however, the slight drop in performance may be attributed to the choice of signal, as the loss-based score is less distinguishable in distribution than the likelihood ratio-based score. We did not compare with [2] as combining multiple MIAs is orthogonal to our approach. 
Our method can debias any number of MIAs using the same reference models without retraining, with combination possible after debiasing if needed.\\n\\nFinally, it is important to note that directly comparing the reported error of DUCI at $p = 0$ and $p = 1$ to that of dataset inference is inherently unfair. DUCI predicts a continuous value, whereas dataset inference is a simple binary classification task.\\n\\n*[1] Maini, P., Yaghini, M., & Papernot, N. (2021). Dataset inference: Ownership resolution in machine learning.*\\n\\n*[2] Maini, P., Jia, H., Papernot, N., & Dziedzic, A. (2024). LLM Dataset Inference: Did you train on my dataset?*\\n\\n*[3] Li, Y., Zhu, M., Yang, X., Jiang, Y., Wei, T., & Xia, S. T. (2023). Black-box dataset ownership verification via backdoor watermarking.*\"}",
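To make the reduction above concrete ($s = \hat{p}$, $s' = 0$, with threshold $\tau$), here is a minimal, hypothetical sketch of turning a debiased usage estimate $\hat{p}$ and its standard error into a p-value via a one-sided normal test. The function name, the standard-error input, and the normal approximation are illustrative assumptions, not the authors' implementation.

```python
import math

# Illustrative reduction of a fine-grained usage estimate to a binary
# dataset-inference decision. NOT the paper's code: the normal
# approximation and helper name are assumptions.

def binary_test_pvalue(p_hat, std_err, tau=0.0):
    """One-sided test of H0: p <= tau against H1: p > tau.

    p_hat:   debiased usage estimate for the protected dataset.
    std_err: standard error of p_hat (e.g., from a CI construction).
    tau:     decision threshold from H0: s = s' + tau with s = p_hat, s' = 0.
    """
    z = (p_hat - tau) / std_err
    # Upper tail of the standard normal: P(Z > z) = erfc(z / sqrt(2)) / 2.
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

Rejecting $H_0$ at a conventional level (0.05 or 0.01) then recovers the binary "dataset used / not used" verdict.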
"{\"comment\": \"Thank you again for your detailed and valuable comments, which have been very useful in improving the quality of our paper. We greatly appreciate your time and effort in reviewing our work!\"}",
"{\"summary\": \"Given a dataset, this paper presents an algorithm (DUCI) which estimates the proportion of that dataset used in the training of a model. The algorithm estimates the false positive rate (FPR) and true positive rate (TPR) of the membership inference guess across the entire dataset to avoid the accumulation of errors that occurs when estimating the FPR and TPR of each individual in the dataset. They conduct experiments to compare the performance of their algorithm (DUCI) against traditional membership inference baselines and an idealized, computationally inefficient MLE baseline. They also analyze the performance of DUCI and membership inference baselines under special sampling conditions and varying dataset sizes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The paper is well motivated and addresses a gap in the literature by taking a fine grained approach to the data usage problem.\\n\\nThe proposed approach (DUCI) is significantly more computationally efficient compared to previous approaches.\", \"weaknesses\": \"I don't see any major weaknesses. It would be nice to show a comparison between DUCI and SOTA \\u201cbinary\\u201d data usage algorithms for the specific case for when p=1 and p=0 to demonstrate that DUCI still has comparable performance to \\u201cbinary\\u201d data usage algorithms in these specific cases.\", \"questions\": \"If TPR=FPR, how does this affect the debiasing results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To Reviewer BehN\", \"comment\": \"> 1. The authors mention two possible improvements in Appendix G. I think this paper would be really strong if the authors could be more concrete on how to apply one of the ideas to their algorithm and show some results.\\n\\nWe sincerely appreciate your efforts in reviewing the improved methods in the Appendix! In response to your question, we have updated the details of the second-order debiasing methods in Appendix H.2 in the [revised paper](https://openreview.net/pdf?id=EUSkm2sVJ6). However, we maintain the simple first-order dataset-level debiasing method in the main paper, ***as one of our primary focuses is to show that the key to solving the DUCI problem (via existing MI techniques) lies in the debiasing concept itself, independent of the specific debiasing design. We believe the conciseness and straightforwardness of the main methods are essential for effectively conveying this concept.*** Additionally, as we have demonstrated, ***this simple debiasing approach can serve as a foundation for various refinements***:\\n\\nFirst, **the unit of debiasing is adjustable**: As shown in Footnote 1 and Table 2, this method can be extended to subgroup-level debiasing to address special sampling scenarios---a general challenge faced by all baselines and previous dataset inference techniques.\\n\\nSecond, it can be enhanced to **higher-order debiasing** (so that membership inference methods can leverage higher-order statistics) to capture the correlations between the membership predictions of different records: We present an example of a second-order debiasing method in Appendix H.2 for cases where records in the target dataset are uniformly randomly sampled. However, this example method faces increased algorithmic complexity (from $O(|X|)$ for first-order debiasing to $O(|X|^2)$ for second-order). Developing more efficient higher-order DUCI methods remains an intriguing direction for future work.\"}",
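For concreteness, the first-order dataset-level debiasing mentioned above can be sketched as follows. This is an illustrative reading of the identity $\mathbb{E}[\text{guess rate}] = p \cdot \text{TPR} + (1 - p) \cdot \text{FPR}$; the function name `estimate_usage` and its inputs are hypothetical, and the TPR/FPR values are assumed to be estimated on reference models where membership is known. The sketch also makes the TPR = FPR degenerate case explicit: the denominator vanishes, so the guesses carry no usable membership signal.

```python
# Hypothetical sketch of first-order, dataset-level debiasing.
# Names and inputs are illustrative, not the authors' actual API.

def estimate_usage(guesses, tpr, fpr):
    """Debias aggregated binary MIA guesses into a usage proportion.

    guesses: list of 0/1 membership guesses for the protected dataset X.
    tpr/fpr: dataset-level true/false positive rates of the MIA,
             estimated on reference models.

    Since E[guess rate] = p * TPR + (1 - p) * FPR, solving for p
    gives the debiased estimate below. Requires TPR != FPR.
    """
    rate = sum(guesses) / len(guesses)
    p_hat = (rate - fpr) / (tpr - fpr)
    return min(max(p_hat, 0.0), 1.0)  # clip to the valid range [0, 1]
```

Subgroup-level debiasing would apply the same formula per subgroup with subgroup-specific TPR/FPR estimates.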
"{\"comment\": \"Thank you for your clarifications. I will keep my score.\"}",
"{\"summary\": \"The paper formalizes the problem of dataset cardinality inference, which aims to measure _how much_ of a given dataset has been used in model training. The paper shows how existing out-of-the-box membership inference methods fail to solve this problem and shows how that can be remedied with de-biasing. Experimental results show the benefits of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"This paper introduces a very important problem and gives some solid baselines to tackle it.\", \"The method, error metrics, experimental settings, baselines, and evaluations are thoughtfully designed (in general). For example, I particularly appreciate:\", \"the extra mile effort in deriving confidence intervals;\", \"the use of dataset selection methods in experimental evaluations;\", \"an analysis of why the confidence intervals are large around $p=1/2$;\", \"the experimental setting of book copyright infringement;\"], \"weaknesses\": \"The main drawback, in my opinion, is that there are approximations involved in deriving the confidence estimates, making them potentially incorrect. There appear to be two approximations (please correct me if I'm mistaken):\\n1. replacing $TPR_i$, $FPR_i$ with a single TPR/FPR across all samples, so the de-biasing is not exact;\\n2. assuming independence of $\\hat p_i$'s to compute the confidence intervals.\\n\\nWhile the authors empirically show in Fig 4 that the correlations in item 2 above are small, it would be nice to see that the bias induced by item 1 is also not too large.\\n\\nFinally, I would have liked to see some approaches for rigorously correct (asymptotic or non-asymptotic) confidence intervals in addition to the heuristic ones used here. 
I believe that the XBern confidence intervals given by [Pillutla et al](https://arxiv.org/abs/2305.18447) can be used (XBern confidence intervals for $TPR_i$ and $FPR_i$ can automatically adapt to the correlation, leading to better intervals for $\\\\hat p$).\\n\\n**Other comments**: \\n- I do not understand the derivation of footnote 1. It would be nice to expand on it (possibly in the supplement). \\n- Figure 2 can be clearer if the x axis is in log scale\\n- Missing relevant refs: [Kandpal et al](https://arxiv.org/pdf/2310.09266) for membership inference of users (groups of data) and is related to dataset inference, [Vyas et al](https://arxiv.org/pdf/2302.10870) for copyright protection, [Zhang et al.](https://arxiv.org/pdf/2406.15968) for a recent MIA\", \"questions\": [\"**Poor results around $p=0$**: The results of Table 4 show that the method is not very reliable around $p=0$. This would make it unsuitable to answer the question of _if_ a dataset has been used. Are any modifications possible to adapt the proposed method to [dataset inference](https://arxiv.org/abs/2406.06443)?\", \"Further, how does the proposed method work if our goal is to provide a multiplicative guarantee of the form that $\\\\hat p / p \\\\in (1/c, c)$? These would be more realistic in the small $p$ regime.\", \"Like differential privacy is designed to protect against membership inference, are there any provable protections against DUCI?\", \"Why do you think MIA Guess fails to work?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To Reviewer Tzw7\", \"comment\": \"We thank the reviewer for the positive feedback and helpful suggestions for further improvement!\\n\\n> I don't see any major weaknesses. It would be nice to show a comparison between DUCI and SOTA \\u201cbinary\\u201d data usage algorithms for the specific case for when p=1 and p=0 to demonstrate that DUCI still has comparable performance to \\u201cbinary\\u201d data usage algorithms in these specific cases.\\n\\nThanks for the suggestion! Below is the formulation of using DUCI for the binary dataset inference problem, with results demonstrating the effectiveness of our methods in binary scenarios. We have also included this comparison in Appendix G.5 of the [revised paper](https://openreview.net/pdf?id=EUSkm2sVJ6).\\n\\n***Hypothesis Testing Formulation for DUCI in Binary Dataset Inference Problems***\\n\\nPrior binary dataset inference literature considers the binary hypothesis test to solve the problem, where the null hypothesis has a general form $H_0: s = s' + \\tau$ (i.e., the target model is not trained on the protected dataset). For different methods, $s$ and $s'$ can take the following forms:\\n1. **Dataset Inference [1]**: $s$ and $s'$ represent the distances to the decision boundary measured on a private dataset and a population dataset, respectively. This hypothesis assumes that if the model was trained on the private dataset, the distance measured on the private dataset would exceed that on the public dataset.\\n2. **LLM Dataset Inference [2]**: $s$ and $s'$ are the weighted aggregations of 52 MIA scores over the private and population datasets. This hypothesis assumes that merged MIA scores would be significantly higher for members than non-members over enough samples.\\n3. **Backdoor Watermarks [3]**: $s$ and $s'$ are the confidence scores on the target label given backdoored inputs and given clean inputs. 
This hypothesis assumes that a model trained on a poisoned dataset (if successfully backdoored) will assign higher confidence to the target class when triggered, but not for clean inputs.\\n\\nRegarding DUCI, a straightforward simplification to the dataset inference problem can be made by setting $s = \\hat{p}$ and $s' = 0$, with $\\tau$ serving as a hyperparameter, which may vary depending on the data type. Below, we report the performance of our method adapted for solving the binary dataset inference problem. \\n\\n**Table:** Comparison of p-values between DUCI and binary dataset usage algorithms for determining whether a dataset $X$ (size 500) has been used. The complete training dataset of the target model has a size of 25,000. For p-values, a smaller value for **Dataset Used** is better, while a larger value for **Dataset Not Used** is better.\\n\\n| **Methods** | **p-value (Dataset Used \u2193)** | **p-value (Dataset Not Used \u2191)** |\\n|---------------------------------------|------------------------------|-----------------------------------|\\n| Backdoor Watermark (poison 30% of $X$) | $7.10 \\times 10^{-5}$ | 0.334 |\\n| Backdoor Watermark (poison 100% of $X$) | $\\mathbf{6.18 \\times 10^{-54}}$ | **1.000** |\\n| Dataset Inference | $7.27 \\times 10^{-10}$ | 0.937 |\\n| Ours | $\\mathbf{3.15 \\times 10^{-51}}$ | **1.000** |\\n\\nConsistent with the performance shown in Figure 1, all methods can perfectly solve the binary dataset usage problem when the significance level is set to common thresholds such as 0.05 or 0.01. **For backdoor watermark methods, the main challenge lies in the successful injection of the backdoor when the dataset is not fully sampled or when the protected dataset's relative size is small. 
This will significantly impact performance, e.g., poisoning even 30% of $X$ performs poorly when $|X|$ is small.\\nOur method performs exceptionally well, achieving comparable results to backdoor watermarking when the entire dataset is poisoned.** In principle, the performance of Dataset Inference should be close to our method; however, the slight drop in performance may be attributed to the choice of signal, as the loss-based score is less distinguishable in distribution than the likelihood ratio-based score. \\n\\n*[1] Maini, P., Yaghini, M., & Papernot, N. (2021). Dataset inference: Ownership resolution in machine learning.*\\n\\n*[2] Maini, P., Jia, H., Papernot, N., & Dziedzic, A. (2024). LLM Dataset Inference: Did you train on my dataset?*\\n\\n*[3] Li, Y., Zhu, M., Yang, X., Jiang, Y., Wei, T., & Xia, S. T. (2023). Black-box dataset ownership verification via backdoor watermarking.*\"}",
"{\"comment\": \"Thank you once again for your time and effort in reviewing our paper!\"}",
"{\"comment\": \"> 6. Further, how does the proposed method work if our goal is to provide a multiplicative guarantee of the form that $\\\\hat{p}/p \\\\in (1/c, c)$? These would be more realistic in the small regime.\\n\\nThis is an important and relevant question. However, according to the standard packing argument [1], providing a multiplicative guarantee for DUCI (regardless of the method used) is fundamentally impossible. A straightforward example to think about this is: in the extreme case where the ground truth $p = 0$, $\\\\hat{p}$ must also equal 0 to keep the multiplicative ratio bounded. Below, we provide a more detailed explanation.\\n\\nLet the ground truth membership probability for record $i$ in the training dataset be $p_i$. By definition, a sampled dataset generated under sampling probabilities $(p_1, \\\\ldots, p_n)$ and $(p_1 \\\\pm \\\\frac{1}{10n}, \\\\ldots, p_n \\\\pm \\\\frac{1}{10n})$ can be identical with at least constant probability $\\\\frac{9}{10}$ based on the union bound. As a result, with constant probability, a DUCI algorithm cannot reliably distinguish between datasets sampled with $(p_1, \\\\ldots, p_n)$ versus $(p_1 \\\\pm \\\\frac{1}{n}, \\\\ldots, p_n \\\\pm \\\\frac{1}{n})$, leading to an unavoidable additive error of $\\\\frac{1}{n}$ on either $(p_1, \\\\ldots, p_n)$ or $(p_1 \\\\pm \\\\frac{1}{n}, \\\\ldots, p_n \\\\pm \\\\frac{1}{n})$. Consequently, for any fixed $c \\\\geq 1$, as $p_i \\\\to 0$, the multiplicative error $\\\\frac{\\\\hat{p}_i}{p_i}$ either grows to infinity or shrinks to zero, falling outside the range of $(1/c, c)$.\\n\\nHowever, as discussed in Lines 408\\u2013410, additive error is a more consistent and meaningful metric for the DUCI problem. This is because DUCI is fundamentally a discrete counting problem, where the focus is on the number of incorrect counts, making additive error a more appropriate measure. 
For instance, in a small protected dataset of size 10, the unit of the additive error rate is 0.1, which consistently corresponds to a single misprediction. In contrast, using (relative) multiplicative error will lead to nonsensical results: a single misprediction when $p = 0.1$ produces the same ratio as mispredicting all 10 points when $p = 1.0$, which is clearly unreasonable.\\n\\n> 7. Like differential privacy is designed to protect against membership inference, are there any provable protections against DUCI?\\n\\nIndeed, when the training algorithm satisfies differential privacy, it is possible to prove upper bounds for the error of DUCI (still via the standard packing argument [1]). Loosely speaking (as our goal here is not to derive a tight bound for DUCI), letting the ground truth membership probability be $p_i$ for record $i$ in the training dataset, the sampled datasets under sampling probabilities $(p_1, \\cdots, p_{n})$ and $(p_1\\pm\\frac{1}{n\\varepsilon}, \\cdots, p_n\\pm\\frac{1}{n\\varepsilon})$ only differ by at most $\\frac{1}{\\varepsilon}$ records in expectation by definition. Thus, under an $\\varepsilon$-DP training algorithm, with constant probability any adversary could not distinguish between the datasets sampled by $(p_1, \\cdots, p_n)$ and $(p_1\\pm\\frac{1}{\\varepsilon n}, \\cdots, p_n\\pm \\frac{1}{\\varepsilon n})$. This, in turn, causes an inevitable MAE of $\\frac{1}{\\varepsilon n}$ for estimating the dataset usage $\\frac{1}{n}\\sum_ip_i$ under an $\\varepsilon$-DP training algorithm. The connection between DP auditing via DUCI and the prototypical DP auditing experiment (e.g., via repeated retraining runs) remains an interesting open question.\\n\\n*[1] Hardt, M., & Talwar, K. (2010, June). On the geometry of differential privacy.*\\n\\n> 8. 
Why do you think MIA Guess fails to work?\\n\\nThe high-level reason, as discussed in Section 3.2 *Errors in Optimal Point-Wise Membership Inference*, is that per-point MIA can make errors. These errors may arise from the intrinsic randomness of the algorithm or an inability to capture precise membership information from model outputs. Under DUCI, these errors accumulate across the training set, causing the aggregated MIA guess to deviate significantly from the true $p$. The specific sources of these errors vary, as discussed in prior works [2,3].\\n\\nSome sources of error are intertwined. For instance, when the score used for MIA is challenging to normalize perfectly across data points, the optimal threshold may vary for different points. In such cases, naive threshold sweeping becomes less effective. Methods like RMIA, which apply thresholding to the rank of the score within the population distribution rather than directly thresholding the score, can be more robust in these scenarios.\\n\\nHowever, the objective of this work is not to design the best MIA but to debias any given MIA to perform effectively in DUCI. As such, we do not delve into the specific limitations of existing MIAs.\\n\\n*[2] Aubinais, E., Gassiat, E., & Piantanida, P. (2023). Fundamental Limits of Membership Inference Attacks on Machine Learning Models.*\\n\\n*[3] Maini, P., et al. (2024). LLM Dataset Inference: Did you train on my dataset?*\"}",
"{\"title\": \"To Reviewer THsi\", \"comment\": \"We sincerely thank the reviewer for their valuable comments, which have helped improve the clarity of our work. We have addressed all feedback in the [revised paper](https://openreview.net/pdf?id=EUSkm2sVJ6), with the major revisions including:\\n\\n> Improved clarity of Section 2.1\\n\\na. Added definition of $\\\\theta(x)_y$ in Line 98\\n\\nb. Made the definition of reference model, specific number of population data and how they are used, computation details of RMIA clear in Lines 108-115\\n\\nc. Removed the expression that could potentially cause confusion in Lines 285\\u2013286.\\n\\n> Dependence/correlation of records is handled in a confusing manner.\\n\\nThank you for pointing out the potential confusion. We have improved the clarity of Lines 489-497 in the revised version to separate these two different \\\"correlation\\\" terms:\\n\\n1. **Correlation in Lines 444\\u2013448 or Footnote 1**: \\n The term $\\\\frac{\\\\text{Corr}_i(\\\\text{TPR}_i - \\\\text{FPR}_i, p_i)}{\\\\text{TPR} - \\\\text{FPR}}$ refers to the \\\"correlation\\\" between the **value** of ground-truth sampling probability $p_i$ of record $i$ and the value of $\\\\text{TPR}_i - \\\\text{FPR}_i$. Here, we (slightly abusively) use the term \\\"correlation\\\" instead of \\\"covariance\\\" because neither $p_i$ nor $\\\\text{TPR}_i - \\\\text{FPR}_i$ are random variables. Note that, although the value of $p_i$ may have a correlation with $\\\\text{TPR}_i - \\\\text{FPR}_i$ for each record $i$, in the DUCI pipeline, the $p_i$ is always a fixed constant, and each record $i$ is Bernoulli-sampled according to $p_i$. A more detailed explanation of this has been added to Appendix D.\\n\\n2. **Correlation in Lines 490\\u2013491**: \\n This refers to the correlation between the membership probability predictions $\\\\hat{p}_i$ and $\\\\hat{p}_j$ for different records $i$ and $j$ in the dataset. 
Specifically, \\\"close-to-zero correlation\\\" here means the membership prediction for one record $i$ does not affect the prediction for another record $j$, i.e., $\\mathbb{E}[\\hat{p}_i \\hat{p}_j] = \\mathbb{E}[\\hat{p}_i]\\mathbb{E}[\\hat{p}_j]$ for any $i, j \\in [|X|]$.\\n\\nWe hope these clarifications resolve the confusion.\\n\\n> 3. Why assuming some joint/mean logits follow a normal distribution? Why the MLE with joint logits is presented as an idealized baseline?\\n\\nThe motivation and rationale behind the MLE baseline design are threefold. Below, we explain them and will make them clearer in our revision.\\n\\n**Rationale for using MLE**\\nThe likelihood ratio test is theoretically proven to be optimal for binary hypothesis testing by the Neyman-Pearson lemma [1]. Given that DUCI can naturally be framed as a multi-hypothesis testing problem when the granularity of dataset usage inference is known (we can assume the idealized baseline has access to the granularity of dataset usage, providing it with more information than our method), MLE serves as a natural extension of the pairwise likelihood ratio test to address the DUCI problem.\\n\\n**Rationale for assuming joint/mean logits follow normal distributions** \\nApproximating logits as Gaussian distributions to compute likelihoods is a widely adopted and effective practice in the membership inference literature (even in LiRA [2]). This practice is supported by both empirical evidence and theoretical applications, which show that logits often exhibit Gaussian-like behavior across diverse model architectures, data types, and tasks [2, 3, 4]. Therefore, we adopted this practice to compute likelihoods to construct a reasonable and robust baseline that aligns with proven methodologies. 
To the best of our knowledge, no alternative approaches currently provide apparently better performance or comparable simplicity.\\n\\n**Necessity of considering MLE with joint logits** \\nWhile Avg-Logit MLE performs well empirically, averaged statistics inherently introduce information loss. Therefore, it is necessary to include the Joint-Logits MLE as a performance baseline for a lossless scenario. In an idealized setting, where the ground-truth distribution of logits is known, using the joint logits is the most informative choice. However, its sub-optimal empirical performance can be attributed to the curse of dimensionality, i.e., high-dimensional observations (joint logits) typically exhibit a lower signal-to-noise ratio compared to one-dimensional observations (averaged logits).\\n\\n[1] Neyman, J., & Pearson, E. S. (1933). IX. On the problem of the most efficient tests of statistical hypotheses.\\n\\n[2] Carlini, N., Chien, S., Nasr, M., Song, S., Terzis, A., & Tramer, F. (2022, May). Membership inference attacks from first principles.\\n\\n[3] Chenxia et al. (2024). Top-n\\u03c3: Not All Logits Are You Need.\\n\\n[4] Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J., & Pennington, J. (2019). Wide neural networks of any depth evolve as linear models under gradient descent.\"}",
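As an illustration of the Avg-Logit MLE baseline described above (not the authors' code): approximate the averaged logit as Gaussian under each candidate usage fraction $p$ and pick the likelihood maximizer. The candidate grid and the per-$p$ $(\mu, \sigma)$ estimates are assumed inputs, e.g., fit on reference models trained with known usage fractions.

```python
import math
from statistics import NormalDist

# Hypothetical sketch of an Avg-Logit MLE under the Gaussian
# approximation. Names and parameter grids are illustrative.

def avg_logit_mle(observed_mean_logit, candidate_params):
    """Pick the usage fraction p whose Gaussian model best explains
    the observed averaged logit.

    candidate_params: dict mapping p -> (mu, sigma), e.g. estimated
    from reference models trained with usage fraction p.
    """
    best_p, best_ll = None, -math.inf
    for p, (mu, sigma) in candidate_params.items():
        ll = math.log(NormalDist(mu, sigma).pdf(observed_mean_logit))
        if ll > best_ll:
            best_p, best_ll = p, ll
    return best_p
```

A Joint-Logits variant would replace the scalar Gaussian with a multivariate one over all per-record logits, at the cost of the dimensionality issues noted above.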
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}",
"{\"comment\": \"> 2. Leveraging the (asymptotic) XBern confidence intervals given by Pillutla et al to derive a confidence interval applicable to correlation.\\n\\nWe believe that deriving a tight confidence interval (CI) without assuming independence among membership predictions $\\hat{p}_i$ would definitely be informative. However, reliably estimating correlations under DUCI is challenging. ***In DUCI, our goal is to provide a CI for the proportion prediction of a given target model, rather than for the training algorithm used to train that model. This represents the key difference\u2014and the source of difficulty in estimating correlations\u2014between our setting and multiple-tests privacy auditing. Specifically, the correlation $\\mathbb{E}[x_1x_2] - \\mathbb{E}[x_1]\\mathbb{E}[x_2]$ in [1] is typically measured over the randomness of the training algorithm (i.e., empirically calculated across $N$ models trained with the same algorithm). In our case, however, $N = 1$, as each trial involves only a single target model, making it practically infeasible to capture such correlations.***\\n\\n**Theoretical Applicability of XBern CI**\\n\\nThe XBern CI is theoretically applicable to DUCI if a randomization algorithm $\\mathcal{A}$ is applied to the membership prediction vector $\\hat{\\mathbf{m}} = [m_i]_{i=1}^{|X|}$. This shuffled membership prediction vector follows the XBern distribution because the shuffling process ensures that each element in the vector becomes an exchangeable Bernoulli sample. Furthermore, the Wilson condition used in the asymptotic XBern CI derivation is a more relaxed condition than the Lyapunov condition in our paper (which we prove to hold in Appendix E.2). Therefore, theoretically, the XBern CI is computable in our setting.\\n\\n**Empirical Challenges with XBern CI**\\n\\nEmpirically, however, the XBern CI produces overly loose and misleading results in our context due to the above-mentioned reasons. 
For instance, as shown in [Figure 1](https://drive.google.com/file/d/1Sp47vWqkurs16ZbzsxiQpLFyBIIvf53x/view?usp=sharing), when we derive a 95\\\\% confidence interval using Proposition 11 from [1], the CI is so loose that all ground-truth $p$ values fall within the interval with probability = 1. More concerningly, the generated CI often has a length greater than 1, which is meaningless since the proportion $p$ is bounded in $[0,1]$. For a sanity check of the XBern CI implementation, we tested its performance by comparing it to our Lyapunov-based CI under settings where they are theoretically equivalent (i.e., using $n = |X|$ duplicated models so that correlation = 0). We observed from [Figure 2](https://drive.google.com/file/d/1efQVcqN9l8WBfR1bs8x0a4GFWcL5Taec/view?usp=sharing) that the First-Order CI produced by XBern was almost identical to our Lyapunov-based CI.\\n\\nWe believe that incorporating statistical guarantees into DUCI while accounting for correlations between points in one run would be an interesting direction for future work. However, as also noted in [1], under natural datasets (not specifically crafted to induce correlations), the correlation between points tends to be small. Empirically, our CI is tight and informative: (1) ground-truth $p$ values fall within the CI with around 95\\\\% probability, and (2) its length, along with the MAE (standard deviation), reflects a concentrated Gaussian distribution, indicating that the predicted values are closely aligned with the ground-truth $p$.\\n\\n[1] Pillutla, K., Andrew, G., Kairouz, P., McMahan, H. B., Oprea, A., & Oh, S. (2024). Unleashing the power of randomization in auditing differentially private ml.\"}",
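For reference, a minimal sketch of the kind of independence-based, normal-approximation CI that the comparison above refers to. The exact variance estimator used in the paper may differ; the Bernoulli-variance choice and function name here are illustrative assumptions only.

```python
import math
from statistics import NormalDist

# Illustrative CLT-style CI for the mean of per-record membership
# predictions, assuming the p_hat_i are approximately independent.
# NOT the paper's exact construction.

def normal_ci(p_hats, alpha=0.05):
    n = len(p_hats)
    mean = sum(p_hats) / n
    # Treat each prediction as an independent Bernoulli; the variance
    # of the mean is then sum(p * (1 - p)) / n^2.
    var = sum(p * (1 - p) for p in p_hats) / (n * n)
    z = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha=0.05
    half = z * math.sqrt(var)
    # Clip to [0, 1], since the usage proportion is bounded.
    return max(mean - half, 0.0), min(mean + half, 1.0)
```

Unlike the loose XBern intervals discussed above, this interval can never exceed length 1 because it is clipped to the valid range of the proportion.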
"{\"comment\": \"> 4. What is the sampling error mentioned in Line 228 in this particular setting? What is the definition of weak independence in Line 269?\\n\\nThe sampling error mentioned in Line 228 refers to the standard deviation of the empirical estimates of $P(\\hat{m} = 1 \\mid m = 0)$ and $P(\\hat{m} = 1 \\mid m = 1)$ under dataset-level debiasing, over the randomness of $\\theta_j$ and the MIA algorithm. We have clarified this meaning in Line 228. Additionally, we have replaced the term \\\"weakly independent\\\" with \\\"approximately independent\\\" to avoid confusion. Line 269 states that we empirically observe near-zero covariance between membership probability predictions $\\hat{p}_i$ and $\\hat{p}_j$ for any $i, j \\in [|X|]$.\"}",
"{\"metareview\": \"This submission introduces Dataset Usage Cardinality Inference (DUCI): a framework for modeling and inference of the proportion of data used when training a model. The authors (elegantly) motivate the problem in terms of the US Copyright Act, provide statistical guarantees for DUCI, and illustrate their approach on image and text datasets.\\n\\nThe reviewers agreed that the paper is well-written and the problem is timely. They also raise several issues and questions surrounding notation, which were appropriately addressed in the rebuttal. This is a nice contribution that extends the literature on membership inference attacks in a non-trivial direction.\\n\\nI encourage the authors to seriously consider the reviewers' comments when preparing the final version of the manuscript, specifically those related to the presentation of the experiments and the theoretical results.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers remained positive about the paper after the rebuttal. The points raised by the reviewers in terms of presentation, experiments, and theory were appropriately addressed in the rebuttal.\"}",
"{\"summary\": \"In this manuscript, the authors identify key issues of current techniques that aim to ascertain if a dataset was used to train a Machine Learning model. To alleviate these problems:\\n\\n1. The authors formally define the concept of Data Usage Cardinality Inference (DUCI). The authors state that, compared to other binary types of inference, DUCI better reflects real world scenarios, where models are trained on fractions of different datasets. \\n2. The authors propose a way of de-biasing current models that estimate individual membership, i.e. if one individual sample was part of the training dataset. Then, they propose to use these unbiased estimators to compute the overall proportion of the dataset used for training. They also present an asymptotic method to design a confidence interval for this overall proportion used for training. \\n\\nFinally, the authors provide some numerical experiments where they compare the proposed procedure with four other adapted techniques that also perform DUCI. Throughout their experiments, the authors' de-biased method outperforms the other four techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Identifying the dataset used to train a Machine Learning model could have a direct impact on privacy rights or copyright infringement, as mentioned in the Introduction. Hence, I think that this article deals with a relevant problem. I also appreciate that their proposed procedure is cost-effective and intuitive. In my opinion, there is a lot of merit in noticing that Member Identification methods suffer from biases and then presenting a straightforward tool to address this issue.\", \"weaknesses\": \"I think that the main weakness here is the presentation. A lot of times the authors describe mathematical objects by vaguely saying what they are or by making rushed arguments. 
However, this approach is not intuitive enough to give any insight about the matter nor formal enough to have any actual meaning. This overshadows the interesting contributions made in this paper.\\n\\nFrom Lines 108-115, I wonder what is \\\"a number of population data\\\", what is $\\theta(x)_y$ (as this is the first time they use this notation with y as a subscript; in fact, what is y?). What is the reference model modelling, i.e. are these models for membership inference or are these models that represent a real world classifier or regressor? In Line 285-286, it is difficult to understand what \\\"the probability of observing that i-th record\\u2019s likelihood to be a member is greater than randomly sampled population data points\\\" means. This is not even relevant to understand the paper's main contributions, so it should be removed if the authors are not willing to explain it clearly, or rewritten if they prefer to do so. \\n\\nDependence/correlation of records is handled in a confusing manner. In particular, the authors pose the question \\\"Will the ignorance of \\u201ccorrelations\\u201d between records make our method sub-optimal?\\\" in Lines 490-491. The answer here is clearly \\\"Yes\\\", as the authors themselves have stated in Lines 444-448 that under special sampling one should divide the dataset into subgroups and then de-bias using the TPR and FPR within each subgroup. However, this additional step, which accounts for possible high correlation, is not carefully mentioned in Section 4, so I would not assume that this is a fundamental part of their proposed technique. However, it reads as if Lines 489-497 argue that correlation between records is not an issue and that there seems to be limited potential to improve their method in this regard. Maybe the authors here are considering different methods of sampling or different settings but this is not clearly stated in Lines 489-497. This is something that should be addressed. 
\\n\\nRegarding the numerical experiments, there are two things to consider: two methods were adapted from Individual Membership Attacks and the other two baseline estimators were inspired by maximum likelihood estimation but rely on additional modeling decisions, like assuming some joint/mean logits follow a normal distribution. Although this choice is based on a theoretical result, as mentioned in Line 298, it is not clear to me that this assumption would not hinder the performance of these baselines. The authors do mention in Line 276 that they use MIA Guess and MIA Score \\\"To demonstrate the importance of debiasing [...]\\\", but I think they would need to address why the MLE with joint logits is presented as an idealized baseline. At least the MLE with average logits has good performance in various experiments, so there is evidence in favor of presenting it as an idealized baseline. However, I feel like the experiments in this paper do indicate that the proposed method performs well, under the scenarios considered here.\", \"questions\": \"What is the sampling error mentioned in Line 228 in this particular setting?\\nWhat is the definition of weak independence in Line 269?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you so much for your support, your efforts in reviewing our paper and rebuttals, and your willingness to raise the score! We really enjoy the discussion, and all the questions are both interesting and valuable. We will definitely incorporate the impossibility of multiplicative guarantees and further analysis on the different behaviors of text vs. image data into our revision. Especially, regarding the distinguishability of text samples, we will add practical experiments (e.g., with carefully designed different overlap levels between text sequences) to nuance the argument.\"}",
"{\"comment\": \"Thank you for addressing my questions. It would be great to add the discussion of (1) into the paper. I have updated my score.\"}",
"{\"comment\": \"I appreciate the detailed and careful responses of the authors and the painstaking additional comparisons! I have raised my score accordingly.\\n\\nSome further comments (it is enough for the authors to think about these and address them in the revision; no need to reply here):\\n\\n**Text vs. image data**: I agree that it can be true for some settings but I do not fully buy this argument. For typical language modeling tasks, each \\\"example\\\" is an entire sequence (which can be several 1000s of tokens long). Even though two books can have meaningful phrase-level overlap, I would expect enough differences in such long sequences that it would be possible to notice a difference. While the empirical observation is very interesting, the authors may wish to nuance their argument.\\n\\n**Impossibility of multiplicative guarantees**: This is super interesting, it would be great to add this to the paper somewhere.\\n\\nThanks again and all the best!\"}",
"{\"comment\": \"Thank you for your support! We will definitely include the valuable discussion of (1) into our paper. Once again, we deeply appreciate the thoughtful questions you have raised and the time you have dedicated to reviewing both our paper and the rebuttal.\"}",
"{\"summary\": \"This paper presents a new variation of membership attack: estimating the fraction of data from the training set. Unlike normal MIA, which determines individual membership, the proposed task estimates the data usage directly. The proposed algorithm is based on the fact that the unbiased data usage estimator can be written as a function of the FPR and TPR (Eq. 6) of MIA. In other words, the proposed estimator can adapt any existing MIA attack with FPR and TPR evaluation. The experiments demonstrate the effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a new task in privacy attacks. The main contribution is a practical and scalable data usage estimator that could encourage further research in this area.\", \"weaknesses\": \"My main concern is that this method requires a known training set for estimating FPR and TPR. In practice, the training set is usually private (see the following reference, Zhang et al. 2024).\\n\\n\\n\\nZhang, Jie, Debeshee Das, Gautam Kamath, and Florian Tram\\u00e8r. \\\"Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data.\\\" arXiv preprint arXiv:2409.19798 (2024).\", \"questions\": \"1. In Figure 3, the length of the confidence interval seems to be large compared to the absolute error. Is this true?\\n\\n2. Would this debiasing method downgrade the test power?\\n\\n3. Is there any connection with auditing differential privacy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To Reviewer mJ1c\", \"comment\": \"Thank you for your thoughtful comments and interesting questions! We greatly appreciate their quality and have provided detailed answers to each one below.\\n\\n> 1. Analysis on the approximation error in the debiasing process and the improved clarity of the derivation in Footnote 1\\n\\nOur analysis shows that **the simplification in the debiasing process will not introduce errors in many practical sampling scenarios, such as uniform or i.i.d. sampling (which is the common scenario considered in the long line of prior works in the binary dataset inference literature listed in the Introduction section). Only in special cases where there is a strong correlation between the probability of sampling the $i$-th point $p_i$ and its $TPR_i - FPR_i$ would there be an error. However, this can be effectively mitigated by subgroup debiasing, as shown in Table 2.** We next present the detailed analysis (which is also what footnote 1 analyzed), and we have added details of footnote 1 in Appendix D. \\n\\nIn Equations (7) and (8), we leverage dataset-level $\\\\text{TPR}$ and $\\\\text{FPR}$ (i.e., $\\\\text{TPR} = \\\\frac{1}{|X|} \\\\sum_i \\\\text{TPR}_i$ and $\\\\text{FPR} = \\\\frac{1}{|X|} \\\\sum_i \\\\text{FPR}_i$) to replace the individual $\\\\text{TPR}_i$ and $\\\\text{FPR}_i$. This simplification avoids the computational cost (or large sampling errors) of debiasing each $\\\\hat{p}_i$. This simplification is justified because the proportion $p$ is a dataset-level statistic, as analyzed below. 
To avoid confusion, we introduce $\\\\tilde{p}$ and $\\\\tilde{p}_i$ as the estimators under dataset-level debiasing, and we next prove $\\\\tilde{p} = \\\\frac{1}{|X|} \\\\sum_i \\\\tilde{p}_i = \\\\frac{1}{|X|} \\\\sum_i \\\\frac{\\\\hat{m}_i - \\\\text{FPR}}{\\\\text{TPR} - \\\\text{FPR}}$ is an unbiased estimator of $p$ whenever a correlation term between $\\\\text{TPR}_i - \\\\text{FPR}_i$ and $p_i$ is zero: Given\\n\\\\begin{align}\\n \\\\mathbb{E}[\\\\tilde{p}] = \\\\mathbb{E}\\\\left[\\\\frac{1}{|X|} \\\\sum_i \\\\tilde{p}_i\\\\right] = \\\\frac{1}{|X|} \\\\sum_i \\\\mathbb{E}[\\\\tilde{p}_i] = \\\\frac{1}{|X|}\\\\sum_i \\\\mathbb{E}\\\\left[\\\\frac{\\\\hat{m}_i - \\\\text{FPR}}{\\\\text{TPR} - \\\\text{FPR}}\\\\right]\\n\\\\end{align}\\nPlugging Equation (5) into the above equation, we can get:\\n\\\\begin{align}\\n \\\\mathbb{E}[\\\\tilde{p}] & = \\\\frac{1}{\\\\text{TPR} - \\\\text{FPR}} \\\\cdot \\\\frac{1}{|X|} \\\\sum_i \\\\left(p_i \\\\cdot \\\\text{TPR}_i + (1 - p_i) \\\\cdot \\\\text{FPR}_i - \\\\text{FPR}\\\\right) \\\\\\\\\\n & = \\\\frac{\\\\frac{1}{|X|} \\\\sum_i \\\\left[p_i \\\\cdot (\\\\text{TPR}_i - \\\\text{FPR}_i)\\\\right]}{\\\\frac{1}{|X|} \\\\sum_i \\\\left[\\\\text{TPR}_i - \\\\text{FPR}_i\\\\right]}\\n\\\\end{align}\\nGiven $p = \\\\frac{1}{|X|}\\\\sum_i p_i$, note that \\n\\\\begin{align*}\\n \\\\frac{1}{|X|} \\\\sum_i \\\\left[p_i \\\\cdot (\\\\text{TPR}_i - \\\\text{FPR}_i)\\\\right] = \\\\frac{1}{|X|} \\\\sum_i p_i \\\\cdot \\\\frac{1}{|X|} \\\\sum_i (\\\\text{TPR}_i - \\\\text{FPR}_i) + \\\\text{Corr}_i(\\\\text{TPR}_i - \\\\text{FPR}_i, p_i).\\n\\\\end{align*}\\n(Here, we use the term \\\"correlation\\\" instead of \\\"covariance\\\" because $\\\\text{TPR}_i - \\\\text{FPR}_i$ and $p_i$ are not random variables.) 
Thus, we have:\\n\\\\begin{align}\\n \\\\mathbb{E}[\\\\tilde{p}] = p + \\\\frac{\\\\text{Corr}_i(\\\\text{TPR}_i - \\\\text{FPR}_i, p_i)}{\\\\text{TPR} - \\\\text{FPR}}.\\n\\\\end{align}\\nThe correlation term suggests that for many practical sampling methods (e.g., uniform sampling, i.i.d. sampling), this simplification results in an unbiased estimator for $p$ because the correlation is 0. For specialized sampling methods, subgroup debiasing can ensure (empirical) unbiasedness, as discussed in Table 2, by making $\\\\text{TPR}_i - \\\\text{FPR}_i$ constant within each subgroup. This ensures that the correlation term for each subgroup is 0, providing a group-level debiasing approach. Note that the term \\\"correlation\\\" (slightly abused here) is used in the context of how the value of $p_i$, a pre-fixed constant in the DUCI pipeline, is determined. This is distinct from the correlation between membership predictions in Figure 4.\\n\\n> Figure 2 can be clearer if the x axis is in log scale; Missing relevant refs: Kandpal et al for membership inference of users (groups of data) and is related to dataset inference, Vyas et al for copyright protection, Zhang et al. for a recent MIA\\n\\nThanks for the suggestions. We have updated in [revised version](https://openreview.net/pdf?id=EUSkm2sVJ6).\\n\\n>\"}"
]
} |
EUBMPmcCWQ | PLS-based approach for Fair Representation Learning | [
"Elena M. De-Diego",
"Adrian Perez-Suay",
"Paula Gordaliza",
"Jean-Michel Loubes"
] | We revisit the problem of fair representation learning by proposing Fair Partial Least Squares (PLS) components. PLS is widely used in statistics to efficiently reduce the dimension of the data by providing representation tailored for the prediction. We propose a novel method to incorporate fairness constraints in the construction of PLS components. This new algorithm provides a feasible way to construct such features both in the linear and the non linear case using kernel embeddings. The efficiency of our method is evaluated on different datasets, and we prove its superiority with respect to standard fair PCA method. | [
"Fair Representation Learning",
"PLS",
"Supervised Learning",
"Dimension Reduction",
"Fairness"
] | Reject | https://openreview.net/pdf?id=EUBMPmcCWQ | https://openreview.net/forum?id=EUBMPmcCWQ | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"s65nN0zymn",
"d6YVInqWaZ",
"SQI1hvE3WL",
"Q3QocBOPC2",
"HCho2gv8xk",
"CXXbVuRZef",
"7jqUJ2KUK6"
],
"note_type": [
"decision",
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1737524163894,
1732740848011,
1734669636089,
1730606283243,
1730650358455,
1730126431254,
1730588603743
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12056/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12056/Area_Chair_ht8h"
],
[
"ICLR.cc/2025/Conference/Submission12056/Reviewer_7pxk"
],
[
"ICLR.cc/2025/Conference/Submission12056/Reviewer_4RfV"
],
[
"ICLR.cc/2025/Conference/Submission12056/Reviewer_mavE"
],
[
"ICLR.cc/2025/Conference/Submission12056/Reviewer_qbUD"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Answer to Reviewer mavE\", \"comment\": \"Thank you for your detailed feedback and for acknowledging the contributions of our work. We greatly appreciate your emphasis on the importance of robust empirical validation and will carefully consider your suggestions to further strengthen this aspect.\\nWe would like to respectfully address the point regarding the theoretical contribution of our work. While we understand the reviewer's perspective, we believe that our approach is indeed theoretically significant, as it introduces the first fair model within the Partial Least Squares (PLS) framework\\u2014an established and widely utilized applied method. Our approach is formal, thoroughly grounded in theory, and rigorously explained.\\nWhile the primary focus of our paper was to compare methods within similar settings (e.g., orthogonal components), we acknowledge the value of broader comparisons. In light of this, we included comparisons with fairPCA and made our source code available, enabling easy and fair evaluations of our model on standard benchmark datasets, thereby providing a more comprehensive assessment of its effectiveness in bias mitigation.\\nAdditionally, we would like to highlight two points: First, our experiments were conducted using a representative set of datasets commonly used in fairness research. Second, as no alternative dataset suggestions were offered by the reviewers, we believe that the datasets selected were the most appropriate given the available resources.\"}",
"{\"metareview\": \"This paper studies fair representation learning for tabular data. It works in a setting where all data has labels y and a sensitive attribute s. The goal is to learn a projection of the original data matrix such that it maintains the information related to y while minimizing the dependence on the sensitive attribute. It builds on the partial least squares method by adding fairness regularization. Algorithms are proposed to compute the solution and an extension is considered for the kernel setting. In addition, the paper also discusses the application in LLMs.\\n\\nThe main concern in the review and the discussion phase is that the paper is weak in terms of empirical studies. In particular, the paper does not compare to any baseline algorithms mentioned in the related work section and potentially others mentioned by the reviewers. By considering the content of the paper, reviews and concerns, the AC recommends a rejection. The authors should consider improving their paper in terms of empirical studies.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers have engaged with authors during the discussion period. However, the main concern about the weak experiments remains.\"}",
"{\"summary\": \"This paper studies fair representation learning. Specifically, it combines Partial Least Squares (PLS) with an additional fairness criterion that characterizes the covariance dependence between the new data representation and the demographic attribute $S$. The linear and non-linear cases are both considered. Finally, the proposed algorithm is tested on different datasets and performs better than the standard fair PCA method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow, with a clearly presented mathematical part.\\n2. Interpretations and general thoughts are provided. \\n3. A possible relation to fairness in LLM is discussed.\", \"weaknesses\": \"Although I appreciate the presentation of the work, I have the following concerns.\\n1. The empirical comparison to previous fair PCA is only tested on one dataset (Adult Income) and provided in the Appendix. It's not convincing that the proposed work will outperform previous work in most cases.\\n2. Although a possible application to fairness in LLM is discussed, it's rather superficial. Unless you have done experiments on LLM to measure fairness, I do not suggest this as a separate section in the main paper. \\n3. The experiments are conducted on simple tabular data. Will it be possible to test on more complex and high-dimensional datasets, given that you are considering dimension reduction? \\n4. Several related works in fair representation learning are missing for discussion [1,2,3]. \\n\\n[1] Kim, Jin-Young, and Sung-Bae Cho. \\\"Fair representation for safe artificial intelligence via adversarial learning of unbiased information bottleneck.\\\"\\u00a0_SafeAI@ AAAI_. 2020.\\n\\n[2] Shui, Changjian, et al. \\\"Fair representation learning through implicit path alignment.\\\"\\u00a0_International Conference on Machine Learning_. 
PMLR, 2022.\\n\\n[3] Zamani, Amirreza, Borja Rodr\\u00edguez-G\\u00e1lvez, and Mikael Skoglund. \\\"On information theoretic fairness: Compressed representations with perfect demographic parity.\\\"\\u00a0_arXiv preprint arXiv:2408.13168_\\u00a0(2024).\", \"questions\": \"See previous section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a new algorithm for fair representation learning, specifically within the framework of decomposition methods where fairness constraints are imposed over the learned components. The authors address two cases: one where fairness regularization is based on the covariance between the projected data and the sensitive attribute, and one where regularization is based on the Hilbert-Schmidt Independence Criterion (HSIC).\\nThe paper also conducts a thorough evaluation of the resulting representations.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written with a clear introduction.\", \"Fair representation is an important problem, particularly in high-dimensional data where many representation learning methods fail.\", \"The paper's utilization of a regularization parameter, which seems to effectively explore the accuracy-fairness front, is a big advantage.\", \"The method is simple and straightforward.\", \"The inclusion of the Equality of Odds constraint is a valuable addition.\", \"The experiments are comprehensive and include evaluation of the representation itself.\"], \"weaknesses\": \"* The paper suggests using Gradient Descent for optimization without discussing convexity. Even empirical testing would be valuable - such as showing convergence over epochs to identify patterns reflecting non-convex problems (like bumps).\\n* Section 4.3 appears disconnected and seemingly unrelated, particularly as it doesn't address all challenges in seq-to-seq fairness problems, only covering the relatively straightforward fact that text can be encoded.\\n* A main concern is that no comparison to other methods exists in the main text. 
Given this paper doesn't focus on groundbreaking theoretical results (fair decomposition methods are not new), such comparisons should be of utmost importance.\\nFor example, for fair representation there are more than a few algorithms already addressed in this paper, as well as other leading candidates like [1] or [2]. This is specifically true for the fair-classification setup where many libraries have been published, like IBM's [3] and [4] to name a few.\\n\\n[1] \\\"Fair Normalizing Flows\\\"\\n[2] \\\"Efficient Fairness-Performance Pareto Front Computation\\\"\\n[3] \\\"AI Fairness 360\\\"\\n[4] \\\"Aequitas: A bias and fairness audit toolkit\\\"\", \"questions\": [\"In lines 331-333, optimization details are missing. Additionally, it's unclear if this is the algorithm used in the evaluation section\", \"Does the data contain any preprocessing except normalization?\"], \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"details_of_ethics_concerns\": \"The paper presents a new algorithm for fair machine learning, which can be used in downstream applications.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper provides a fair representation learning framework by utilizing the technique of PLS with fairness constraint. It leverages the inherent benefits of PLS, such as extracting useful information from the original features in a lower-dimensional space. The paper provides two versions of the framework: linear and non-linear (the latter by applying reproducing kernels, making it more suitable for feature spaces of arbitrarily large dimensionality).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The logic of the paper is clear, and the notations are well-defined. The proposed method is solid in its mathematical formulation. The authors incorporate two different fairness constraints (demographic parity and equalized odds) into the proposed method. The extension of the method to LLMs (though briefly covered), which overcomes the limitations of transforming the linear layer with SVD, is inspiring.\", \"weaknesses\": [\"My main concern is with the empirical results and the settings.\", \"The learned fair representations in the downstream tasks are evaluated using different target models but are not compared with other baseline methods, which makes the effectiveness of the proposed methods less convincing. The authors need to demonstrate that through PLS, the learned fair representations can achieve better accuracy, fairness, or efficiency compared to other methods (for example, using VAE or other disentanglement methods).\", \"The chosen datasets are all tabular, and the dimensionality is not very high. Therefore, I am concerned about the proposed method\\u2019s performance with high-dimensional data (e.g., image data).\", \"The motivation for the proposed method is weak. Although incorporating fairness constraints into PLS is a novel attempt, the justification for using PLS is not strong. 
In the introduction (lines 121-141), the authors introduce existing fair representation learning methods but do not sufficiently justify the benefits of using PLS. The only comparison made is with PCA, stating that PLS-built features are more accurate than PCA components, which is not enough to fully support the choice of PLS.\", \"In the paragraph (lines 274-283), the authors explain why Fair PLS cannot be formulated in closed form. As a result, the algorithm requires iterations to solve for the weight $w_h$. In each iteration, it requires the re-computation of the eigenvectors; when the dimension is large, this would increase computation cost and decrease efficiency.\"], \"questions\": \"Is there any convergence issue or analysis with the proposed method when the original data has high dimensionality? For example, during the optimization iterations, could the method get stuck in local minima, leading to convergence problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a method for fair representation learning based on Partial Least Squares (PLS). The proposed approach employs supervised learning, requiring input data along with sensitive attributes $S$ and target label $Y$. The goal is to project the input data into a $k$-dimensional subspace that maximizes the covariance with $Y$ while minimizing the covariance with $S$. The method is implemented as a linear projection optimized via gradient descent and is further extended to non-linear kernel projections using Hilbert-Schmidt Independence Criterion (HSIC).\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"This paper studies an important problem of fair representation learning.\", \"weaknesses\": \"I have the following questions and concerns regarding the contribution and evaluation of the work:\\n\\n1. **Motivation for PLS-based Framework:** It would be helpful for the authors to clarify the motivation behind selecting PLS as the foundation for their framework. There are various types of approaches for fair representation learning, such as adversarial learning [1], disentanglement [2], and distribution alignment [3]. What are the specific advantages of a PLS-based approach in comparison to these methods?\\n\\n2. **Applicability and Practical Constraints:** The proposed method requires annotations for both the target label $Y$ and the sensitive attribute $S$, which can limit practical applications. Compared to existing approaches in unsupervised fair representation learning or those that do not rely on sensitive attribute annotations (such as [4]), it would be beneficial for the authors to further clarify the unique advantages of their method, potentially in terms of efficiency, theoretical guarantees, or effectiveness.\\n\\n3. **Evaluation and Comparison:** The evaluation of the proposed method is based on relatively small datasets and lacks comparisons with related approaches. 
The current experiments seem to focus on applying the learned representations to various classifiers (like in Figure 1) rather than comparing with alternative representation learning methods. The lack of thorough evaluation and comparison makes it challenging to validate the effectiveness of the proposed method. \\n\\n4. **Extension to LLMs:** It's helpful that the authors discussed, in Section 4.3, extending their method to Large Language Models (LLMs), where their method could decompose the CLS-embedding from a transformer encoder for fairness constraints. However, it's suggested that the authors could provide a more detailed discussion (with a mathematical formulation) in this section and conduct experiments to validate this extension.\\n\\n5. **Coupling Issue in Fair Representation Learning:** Fair representation learning often involves a trade-off between fairness constraints and downstream task performance. It would be insightful if the authors could discuss how their method might address or mitigate this issue.\\n\\n6. (Minor point on notations) In the previous context, the authors use x to refer to the input data, but in Section 4.3, x refers to the transformed data in the latent space. The authors may use a different symbol to avoid confusion.\\n\\n\\n[1] Madras, David, et al. \\\"Learning adversarially fair and transferable representations.\\\" International Conference on Machine Learning. PMLR, 2018.\\n\\n[2] Balunovic, Mislav, Anian Ruoss, and Martin Vechev. \\\"Fair Normalizing Flows.\\\" International Conference on Learning Representations. 2022.\\n\\n[3] Creager, Elliot, et al. \\\"Flexibly fair representation learning by disentanglement.\\\" International conference on machine learning. PMLR, 2019.\\n\\n[4] Chai, Junyi, and Xiaoqian Wang. 
\\\"Self-supervised fair representation learning without demographics.\\\" Advances in Neural Information Processing Systems 35 (2022): 27100-27113.\", \"questions\": \"It would be helpful if the authors could clarify my above questions regarding the contribution and evaluation of the work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
EUAxxrxOM8 | Model predictive control is almost optimal for restless bandits | [
"Dheeraj Narasimha",
"Nicolas Gast"
] | We consider the discrete time infinite horizon average reward restless markovian bandit (RMAB) problem. We propose a model predictive control based non-stationary policy with a rolling computational horizon $\tau$. At each time-slot, this policy solves a $\tau$ horizon linear program whose first control value is kept as a control for the RMAB. Our solution requires minimal assumptions and quantifies the loss in optimality in terms of $\tau$ and the number of arms, $N$. We show that its sub-optimality gap is $O(1/\sqrt{N})$ in general, and $\exp(-\Omega{N})$ under a local-stability condition. Our proof is based on a framework from dynamic control known as dissipativity. Not only is our solution easy to implement but performs very well in practice when compared to the state of the art. Further, both our solution and our proof methodology can easily be generalized to more general constrained MDP settings and should thus, be of great interest to the burgeoning RMAB community. | [
"Restless Multi-Armed Bandits",
"Markov Decision Processes",
"Constrained Optimization",
"Stochastic Control"
] | Reject | https://openreview.net/pdf?id=EUAxxrxOM8 | https://openreview.net/forum?id=EUAxxrxOM8 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vU9MZJTJrn",
"tKrjGzwFlG",
"rVgAwISZEc",
"p22ns2PGXL",
"bmSklpK6ql",
"Vogn3m0X8k",
"TxYwkAozHE",
"SzwDWNk9mm",
"SjdXIjcMSK",
"QSJ3i55RSA",
"Q0qSDjDKFB",
"OFpDytuNrn",
"O616rJbFGC",
"Mx2mdW1S7w",
"J5RIpk1kRO",
"IZadNzBSrz",
"I9ci5YPEt6",
"FoSeLiAGJ4",
"EbbO7JtEVt",
"5n0TCCUVXd",
"4RlZQQvMy3"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730602794236,
1732553955309,
1732725994866,
1731948660172,
1731948837590,
1732724994458,
1732554165862,
1730196193551,
1734792602517,
1730649250150,
1732554029079,
1732725420305,
1731948875975,
1732642354692,
1731948819941,
1732726237962,
1730731401212,
1737523682544,
1732652546450,
1731948906400,
1731948713767
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5076/Reviewer_QiUV"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Reviewer_Q5VC"
],
[
"ICLR.cc/2025/Conference/Submission5076/Area_Chair_Gnm9"
],
[
"ICLR.cc/2025/Conference/Submission5076/Reviewer_WFmj"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Reviewer_WFmj"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Reviewer_kuLz"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5076/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper addresses the infinite-horizon average reward RMAB problem. It proposes a model predictive control (MPC) policy using a rolling computational horizon of length $\\\\tau$, which achieves a suboptimality gap of order $\\\\mathcal{O}(1/\\\\sqrt{N})$, with $N$ being the number of arms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents a novel use of dissipativity for analyzing RMABs, which offers fresh insights into this field. The suboptimality bounds are rigorously derived, showcasing that the MPC-based policy approaches optimality as the number of arms, \\ud835\\udc41, increases.\", \"weaknesses\": \"Though this paper conveys a solid algorithm design and theoretical proof, we must admit that the current submission has significant weaknesses.\\n\\n1. My main concern is that the $\\\\mathcal{O}(1/\\\\sqrt{N})$ gap has been a well-known result for a long time, which various types of algorithms can achieve. In particular, the LP-based algorithms in many related works cited by this submission can also achieve this optimality gap without an additional MPC layer. Though the reviewer admits that the proposed MPC layer can bring some advantages, the key reason that this reviewer does not champion this paper is that a new work (https://arxiv.org/pdf/2410.15003) has recently pushed the gap to the order of $\\\\mathcal{O}(1/N)$ by using a diffusion approximation technique. I understand that the authors submitted their work earlier than this recent work and may not be aware of it; however, the main concern still holds, as the proposed MPC-based algorithm does not improve the well-known optimality gap.\\n\\n2. Resolving the LP at each time step is not really a novel idea, as it can be found in multiple related works cited by this submission. Though controlling $\\\\tau$ is new, resolving the LP at each time step incurs a significant computational cost. 
Aligned with my first concern, why do we need such a complex algorithm that does not even improve the theoretical guarantee, which is claimed as a main contribution of this work?\\n\\n3. In many real-world applications that can be modeled within an RMAB framework, the underlying MDP for each arm may not be known in advance. My question is: how can we leverage the proposed MPC-based algorithm for such a setting? \\n\\n4. This paper considers a homogeneous arm setting. The reviewer quite doubts the scalability of the proposed algorithm when arms are heterogeneous and the number of arms is very large.\", \"questions\": \"Please refer to **Weaknesses**.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer kuLz,\\nWe thank you for your review. We hope that our response to your concerns has been satisfactory. As the time for open discussions is coming to a close, please let us know if there are any further clarifications to be added. We hope to address any remaining questions.\"}",
"{\"title\": \"Revisions based on your review\", \"comment\": \"Dear Reviewer WFmj,\\n\\nApart from fixing the typos and sculpting sentences based on your recommendations, we have also added a few lines of clarification on pages 3 and 6. Please let us know if you find any others. More importantly, as mentioned in the general comments, we added a paragraph on page 2 highlighting our own contributions and their place in the literature. Further, we explicitly described the shortcomings of the recent work on diffusion and the $\\\\mathcal{O}(1/N)$ error rate by Yan et al. relative to our own work on page 13 of the appendix. \\n\\nThank you\"}",
"{\"title\": \"General answer\", \"comment\": \"We thank the reviewers for their consideration and time. We found these reviews constructive and quite positive (despite what we think is a misunderstanding with Rev WFmj). Before answering the specific comments of each reviewer below their review, we first make a general comment that addresses the concern of multiple reviewers about the novelty of our results.\\n\\nBroadly speaking, the literature on heuristics for time-average restless bandits can be decomposed into two kinds of algorithms:\\n- Algorithms that perform well in practice but require UGAP (e.g., Whittle index, the algorithm from Verloop 16)\\n- Algorithms that do not require UGAP but that do not perform well in practice (Hong et al. 23, 24, Avrachenkov 24, Yan 24)\\n\\nThe main contribution of our paper is the proof that a very natural model predictive control (LP-update) provides a best-of-both-worlds solution with minimal assumptions. We are not claiming that this algorithm is new, as the idea of resolving an LP for finite-horizon restless bandits already exists in the literature. Yet, all the papers that proposed to use this idea analyzed the algorithm in the finite-horizon case. The main reason for this is that, without the framework of dissipativity, the analysis of the time-average case is hard. Note that, in general, a finite-horizon policy may not even operate optimally at the fixed point. The use of this framework is one of the key technical novelties of our approach. \\n\\n\\n**Further comparison with related work**: Some reviewers have noted the works of Verloop 2016 and Zhang and Frazier 2022 as examples of LP-based policies that would work in our setting. This is not true unless an additional condition known as the Uniform Global Attractor Property (UGAP) is satisfied. The UGAP assumption is extremely difficult to verify, and weakening it has been the focus of much of the recent literature on restless bandits. 
Motivated by the idea that the fixed point is the optimal operating point for the restless bandit problem, the first paper to break the UGAP requirement was Hong et al (2023), which used the so-called synchronization assumption. Yan 2024 used a reachability condition, Hong et al (2024) used aperiodicity and local attractor conditions, and Avrachenkov 2024 used a fluid-policy convergence assumption. Please take a look at the related work section for a more careful look at the recent literature on these assumptions and their motivation. To the best of our knowledge, all these assumptions bear on both the system parameters *as well as on the policy used*; more importantly, they are stronger than the assumptions we place on the parameters.\\n\\nOn the other hand, Section 4.1 shows that a very weak assumption on the system parameters (the weakest to the best of our knowledge) is sufficient to ensure the optimality of the LP-update policy. Our work not only has the weakest assumptions on the system parameters (the ergodicity-coefficient condition, Assumption 1) but also makes no assumptions on the policy space; such assumptions are easily verifiable when the system parameters are known. In order to bridge the gap from finite-horizon to infinite-horizon problems, we use dissipativity to show that the equivalent rotated cost minimization problem is monotone increasing but bounded. This critical insight allows us to work with the value function instead of proving that our algorithm *steers* the state space towards the fixed point or needing to show that the policies from a finite-horizon setting align with the infinite-horizon optimal policy.\\n\\nIn conclusion, our result on the asymptotic optimality of the LP-update is actually very surprising, since it suggests that a weak ergodicity assumption on the single-arm problem, without additional assumptions on the policy, is sufficient to ensure convergence of the value function. 
From a more technical perspective, returning to the value function instead of looking at controllability in policy spaces can be very advantageous, since the $\\\\arg \\\\max$ function used to find policies typically does not have good continuity properties, whereas the value function retains such properties very well.\\n\\nWe will add these remarks to our main article to make these statements clear to our readers.\"}",
"{\"title\": \"Answer to comments (2/2)\", \"comment\": \"**Rev** (6): *In Section 3.1 after introducing the optimization problem in (8), the authors claimed that this problem is computationally easy to solve. On one hand, indeed this is a linear problem and it is \\\"relatively\\\" easy to solve when the state space and the number of arms is small, given many LP solvers. On the other hand, this claim is not precise, since solving a LP with large-scale parameters/spaces can still be very computationally expensive and take a lot of time in practice. This may be one of the limitations of LP based method for RMAB compared to the Whittle index policy although LP based methods do not require the indexability condition*.\\n\\n**Answer**: We would politely like to disagree with the reviewer's claim regarding index policies. Even though a sub-cubic algorithm allows us to resolve indexability and compute the index, this is insufficient to prove the optimality of Whittle's index (or LP-index) policy. As early as 1989, Weber and Weiss showed a counterexample where an indexable system gave sub-optimal value under Whittle's index policy. It is critical to note the role of UGAP-like additional assumptions in resolving this gap in performance. We do agree with the reviewer that, when the appropriate assumptions hold, an indexing policy needs to compute the index only once at the beginning, and no new computations need to be made except arranging the arms according to the priority order. \\n\\n\\nAs pointed out in the answer to reviewer kuLz, we ran some empirical tests on the time to solve an LP problem using the default LP solver of PuLP (a Python library). Our implementation (which is not specifically optimized) takes roughly $(|S| \\\\tau)^2$ micro-seconds of computation to compute a policy at each step. 
There seems to be a natural trade-off between the strength of the assumptions made on the system and the complexity of the algorithm required to achieve optimality, both conceptually and computationally.\\n\\n\\n**Rev** (7): *Can you elaborate why the definition of (8) imposes the constraint to be satisfied for each time as claimed in lines 198-199?*\\n\\n**Answer**: Equation (8b) constrains the actions so that equation (4) holds at each time step, i.e., $u \\\\in \\\\mathcal{U}(x)$, which means that at most an $\\\\alpha$ fraction of the arms can be pulled at each time, and if an $x_s$ fraction of the arms is in state $s$, then no more than $Nx_s$ arms can be pulled from state $s$. This restriction holds at *each time step*.\"}",
"{\"title\": \"Regarding the Revision\", \"comment\": \"Dear Reviewers,\\n\\nIn light of your reviews we have made a few changes to our draft. Firstly, we have gone over the draft and corrected typos where we could find them, as well as modified some sentences slightly to reduce space consumption or improve the phrasing. Such changes are not highlighted in a different color. Secondly, we have inserted a few major changes; these are highlighted in blue in our new draft. Of particular importance is the change made on page 2, which highlights the place of our work in the current literature. This is in accordance with the general comment we made to all the reviewers; the other comparisons can be found in the additional related works section in Appendix A. \\n\\nThank you.\"}",
"{\"comment\": \"Dear Reviewer QiUV,\\n\\nWe would like to thank you for your review, comments and excellent questions. We have tried to answer your questions as well as address your main concern regarding the rate of convergence of the optimal algorithm for the infinite horizon problem. As the time for open discussions is coming to an end, we strongly encourage you to take a look at our response and let us know if there are any further questions that we may address. We would also like you to reconsider your score, as we believe that your main concerns regarding our paper are unfounded. If there is any clarification that we could add, please ask.\"}",
"{\"summary\": \"This paper proposes a new way to solve restless bandits: putting both the state information and the action information into a continuous vector space, and solving a linear program to maximize the reward over the next $\\\\tau$ rounds. When we look forward $\\\\tau$ steps with a large enough $\\\\tau$, this algorithm can find near-optimal solutions. The authors show that normally the suboptimality gap is $O(\\\\sqrt{1/N})$, but under certain conditions, it can be reduced to $O(\\\\exp(-cN))$, where $N$ is the number of arms and $c$ is some constant. As for the gap corresponding to $\\\\tau$, they also show it is about $O({1\\\\over \\\\tau})$. In experiments, it is shown that choosing $\\\\tau$ as a small value (e.g., 10) leads to very good performance, indicating the efficiency of model predictive control.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The analysis shows a good suboptimality gap.\", \"The algorithm is quite efficient to implement.\", \"The experiment results are also convincing.\"], \"weaknesses\": [\"The author assumes that all the arms are statistically identical. Is this a common assumption in restless bandits? I think there are many cases in which the arms are not identical. It seems that your algorithm cannot be adapted to this setting easily (e.g., if all the arms have distinct transitions and rewards)?\"], \"questions\": [\"Are there any comparisons of the running time of different algorithms, and of your algorithm with different $\\\\tau$?\", \"Are there any experiments on real data?\", \"Which part of the proof shows that $\\\\tau(\\\\epsilon) = O({1\\\\over \\\\epsilon})$?\", \"=========After Rebuttal=========\", \"Thanks for your reply, I do not have other questions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This is a borderline paper. Overall, the reviewers are critical of various aspects of the paper, most notably the somewhat incremental novelty and computational scalability issues. I therefore believe that a major revision of the paper is necessary. The reviews contain a variety of suggestions on how to improve and revise the paper.\", \"additional_comments_on_reviewer_discussion\": \"There was a significant discussion among reviewers and between reviewers and authors.\"}",
"{\"summary\": \"This paper studied the discrete-time infinite-horizon average-reward restless Markovian bandit (RMAB) problem and focused on the asymptotic optimality in this setting. In particular, a model predictive control (MPC) based non-stationary policy with a rolling computational horizon $\\\\tau$ is proposed and its sub-optimality gap is presented. The performance of this policy is also evaluated via simulations.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"RMAB has been extensively studied in recent years, in both the offline and online settings. This paper investigates the fundamental properties of RMAB (e.g., the asymptotic optimality, optimality gap) in the offline setting. It is a challenging and interesting problem.\", \"A model predictive control (MPC) based non-stationary policy has been developed and theoretically analyzed, i.e., its optimality gap is characterized in terms of the number of arms $N$.\", \"Some experimental results were presented to validate the performance of this MPC based policy and its comparison with baselines.\"], \"weaknesses\": [\"The asymptotic optimality performance of RMAB has attracted much attention in recent years. Although this paper proposes a policy leveraging the MPC idea, it is hard to identify the technical novelty from the perspectives of algorithm design and technical proofs and results (see questions below). This paper heavily relies on previous works such as Gast et al. 2023a,b.\", \"The simulations are rather weak in both the settings and the baseline methods considered.\", \"This paper in general is poorly written, with many typos, broken sentences and abused notations.\"], \"questions\": [\"On one hand, the LP based relaxation has been extensively used in the RMAB literature, such as Verloop 2016, Zhang and Frazier 2022. On the other hand, the randomized rounding procedure is almost the same as that in Gast et al. 
2023a, and a finite-horizon MPC algorithm (LP-update policy) was proposed in Gast et al. 2023a,b. It is more like a straightforward extension. From the algorithmic perspective, can you more explicitly state what you see as the key novel aspects of the MPC based algorithm compared to prior work, particularly Gast et al. 2023a,b?\", \"The first result in Section 4.1. This is not surprising, and it is commonly known in the RMAB literature that the LP-based method for RMAB is provably asymptotically optimal. Indeed, the LP based method has been leveraged to design index policies for the RMAB problem without the hard-to-verify indexability condition required by the Whittle index policy, and such an LP-based method for designing index policies is provably asymptotically optimal as shown in the literature; e.g., Verloop 2016 is one of the first works in this domain. Can you discuss how your result in Section 4.1 advances the state-of-the-art beyond what was already known from works like Verloop 2016? Are there any aspects of your analysis or bounds that are novel or improved?\", \"The second result in Section 4.2. Likewise, the proof is directly from Gast et al. 2023a and Hong et al. 2024a. For both results, can you clarify exactly how the use of dissipativity differs from or improves upon previous approaches? Can you discuss any limitations of previous methods that your approach overcomes?\", \"In practice, how to determine how many arms to pull at each time, given that $\\\\alpha N$ may not be an integer? If it is not an integer (one may consider it as an average constraint; then there is no need to design an index policy).\", \"Equation (3) itself is not an RMAB problem. It should be properly defined with the budget constraint to be satisfied at each time. It may be better to rigorously define (3).\", \"Many typos in the paper, just to name a few here: line 120, \\\"the budget constraint, $\\\\alpha$\\\", line 146, $\\\\boldsymbol{x}_s$ is not defined. 
line 144, since $u(s,a)$ is denoted as the empirical distribution of the state-action pairs $(s,1)$, why not just express it as $u(s,1)$ since the action is fixed.\", \"In Section 3.1 after introducing the optimization problem in (8), the authors claimed that this problem is computationally easy to solve. On one hand, indeed this is a linear problem and it is \\\"relatively\\\" easy to solve when the state space and the number of arms is small, given many LP solvers. On the other hand, this claim is not precise, since solving a LP with large-scale parameters/spaces can still be very computationally expensive and take a lot of time in practice. This may be one of the limitations of LP based method for RMAB compared to the Whittle index policy although LP based methods do not require the indexability condition.\", \"Can you elaborate why the definition of (8) imposes the constraint to be satisfied for each time $t$ as claimed in lines 198-199?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer WFmj,\\n\\nWe would like to thank you for your thorough review of our paper. We encourage you to take a careful look at our responses, since it seems there are a number of misunderstandings regarding our technical contributions as well as their place in the current literature on restless bandits. In particular, we encourage you to take a look at our responses to points 2 and 4 in our rebuttal. As the time for open discussions is coming to an end, we hope that you can respond to our replies or provide more points that might need to be addressed in our paper. We hope that our thorough response may have cleared up some of your concerns, and we would like to encourage you to please reconsider your score. If there is any clarification that we could add, please ask.\"}",
"{\"title\": \"Specific revisions based on your review\", \"comment\": \"Dear Reviewer kuLz,\\n\\nIn Section 6, page 9, we added a paragraph explaining both the LP-priority and the FTVA algorithms, against which we compared our work. As mentioned in the general comments, we highlighted the main contributions on page 2. \\n\\nThank you\"}",
"{\"comment\": \"Our main comments concern question (2) below, for which we do not agree with the critique.\\n\\n**Rev** (1) *My main concern is that the order of the $\\\\mathcal{O}(1/\\\\sqrt{N})$ gap has been a well-known result for a long time, which various types of algorithms can achieve. In particular, the LP-based algorithms in many related works cited by this submission without an additional MPC layer can also achieve this optimality gap.*\\n\\n**Answer** We agree that we do not improve on the $O(1/\\\\sqrt{N})$ gap in the general case. Yet, we provide an algorithm that has many advantages:\\n- It is remarkably simple and is very easy to implement, without requiring difficult-to-tune parameters.\\n- It has the same $O(1/\\\\sqrt{N})$ guarantee (which is tight for our benchmark) in what seems to be the most general setting so far.\\n- It performs extremely well in practice.\\n\\n**Rev** (2) *Though the reviewer admits that the proposed MPC layer can bring some advantages, the key reason that this reviewer does not champion this paper is that there is a new work (https://arxiv.org/pdf/2410.15003) that has recently pushed the gap to the order of $\\\\mathcal{O}(1/N)$ by using a diffusion approximation technique. I understand that the authors submitted their work earlier than this recent work and may not be aware of this new work, the main concern always holds as the proposed MPC-based algorithm does not improve the well-known optimality gap.*\\n\\n**Answer** We do not agree with this criticism, for multiple reasons:\\n1. First, as pointed out by the reviewer, the paper https://arxiv.org/pdf/2410.15003 appeared after our submission. Hence, this paper should not be considered as related work. \\n2. Second, even if it is, the problem studied in https://arxiv.org/pdf/2410.15003 is that of a **degenerate finite-horizon** problem. Our setting (average reward) concerns the infinite-horizon problem. These two problems are not equivalent; in fact, in the absence of further assumptions (e.g., 
UGAP) or our own proof techniques, the divergence of the finite-horizon solution can be of order exponential in the horizon (see, e.g., the value of the Lipschitz constant in Gast 2023b). A natural follow-up question might be: can Yan's method be appended to our own to achieve the same results? The answer to this is quite unclear: Yan et al.'s work requires solving a second $H$-horizon stochastic program, and showing that such a solution satisfies the dissipativity property is highly non-trivial. Therefore, if the reviewer's main concern is that an $\\\\mathcal{O}(1/N)$ algorithm has already been discovered, they can rest assured that this does not hold in the infinite-horizon average-reward setting. We encourage the reviewer to reconsider their score.\\n\\n**Rev**: (3) *Resolving the LP at each time step is not really a novel idea, which can be found in multiple works in related work cited by this submission. Though controlling $\\\\tau$ is new, resolving the LP at each time step causes a significant amount of computational complexity. Aligned with my first concern, why do we need such a complex algorithm that does not even improve the theoretical guarantee, which is claimed as a main contribution in this work?*\\n\\n**Answer**. We think that the *beauty* of our solution is that ``it suffices to solve the natural LP at each time to obtain asymptotic optimality in the most general setting''. We refer to our answer to point (1) and our general comment on the novelty of our result. \\n\\n**Rev** (4): *In many real-world applications that can be modeled as an RMAB framework, the underlying MDP for each arm may not be known in advance. My question is how can we leverage the proposed MPC-based algorithm for such a setting?*\\n\\n**Answer** We thank the reviewer for raising an interesting question. Our results can be used in model-based reinforcement learning algorithms. 
For example, by using optimism in combination with our results on finite-time-horizon problems to approximate an equivalent cost function. It should be noted that we have shown the asymptotic convergence of the finite-horizon rotated cost problem to the infinite-horizon problem in our proof of Theorem 4.1 using dissipativity. This is an interesting line of future work which we intend to explore. \\n\\n**Rev** (5): *This paper considers a homogeneous arm setting. The reviewer quite doubts the scalability of the proposed algorithm when arms are heterogeneous and the number of arms is very large.*\\n\\n**Answer**: The reviewer raises another very interesting question. We would like to politely disagree with regard to the scalability issue in the heterogeneous case. We strongly suspect that our solutions can be used to come up with index-based policies for the heterogeneous case, but we do admit that we do not, as of this moment, have a proof for this problem. This is a direction we are currently exploring.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for the rebuttal, which addressed some of my questions. I respectfully disagree with the authors on the computational efficiency of the algorithms. Whether using the Python solver PuLP or the solvers from Gurobi, when the state space is large, it is known to take a \\\"long\\\" time to solve the problem. For many RMAB applications, e.g., cloud computing, resource allocation, and healthcare, the state space is often large in practice. The reviewer acknowledged some technical contributions in the paper but strongly doubted the importance of such results, or the benefit to the community of using the RMAB framework to solve real-world problems, due to the computational complexity of the solutions.\\n\\nNote that ICLR allows paper revisions. However, the reviewer did not see any effort by the authors to improve the paper, given that there are many typos and many parts are poorly written. How could we believe the statement \\\"will make an effort to correct them\\\"?\\n\\nIn addition, I fully agree that the paper https://arxiv.org/pdf/2410.15003 appeared after the ICLR deadline, and this paper should not be criticized for not citing it. Once again, ICLR allows revisions, and the revisions should discuss related work properly.\"}",
"{\"title\": \"Answer to comments (1/2)\", \"comment\": \"We would like to emphasize that **some of the criticisms made are not correct**, and in particular the points 2 and 4 below.\\n\\n**Rev** (1): *On one hand, the LP based relaxation has been extensively used in the RMAB literature, such as Verloop 2016, Zhang and Frazier 2022. On the other hand, the randomized rounding procedure is almost the same as that in Gast et al. 2023a and a finite-horizon MPC algorithm (LP-update policy) was proposed in Gast et al. 2023a,b. It is more like a straightforward extension. From the algorithmic perspective, can you more explicitly state what you see as the key novel aspects of the MPC based algorithm compared to prior work, particularly Gast et al. 2023a,b.?*\\n\\n**Answer**: We thank the reviewer for the comment. The algorithm itself has been well established, the novelty lies in proving that a finite horizon LP-update policy returns an asymptotically optimal solution to an infinite horizon problem under minimal assumptions. Please take a look at the general comment for more details.\\n\\n**Rev** (2) *The first result in Section 4.1. This is not surprising and it is commonly known in the RMAB literature that LP-based method for RMAB is provably asymptotically optimal. Indeed, the LP based method has been leveraged to design index policy for RMAB problem without the hard-to-verify indexability condition as required by the Whittle index policy, and such a LP-based method to design index policy is provably asymptotically optimal as shown in the literature, e.g., Verloop 2016 is one of the first works in this domain. Can you discuss how your result in Section 4.1 advances the state-of-the-art beyond what was already known from works like Verloop 2016? Are there any aspects of your analysis or bounds that are novel or improved?*\\n\\n**Answer**: We do not agree with this criticism by the reviewer. 
Yes, Verloop 2016 proposes a solution that does not require indexability, but this paper requires the condition ``UGAP'', which is known to be very difficult to verify. Moreover, Verloop 2016 does not provide a rate of convergence. See our \\\"general comments\\\" for more details. \\n\\n**Rev** (3): *The second result in Section 4.2. Likewise, the proof is directly from Gast et al. 2023a and Hong et al. 2024a. For both results, can you clarify exactly how the use of dissipativity differs from or improves upon previous approaches? Can you discuss any limitations of previous methods that your approach overcomes?*\\n\\n**Answer**: With regard to Section 4.2, we note that, due to the results from Section 4.1 and the continuity of the rotated cost function, we can conclude that the LP-update policy does *steer* the state towards the fixed point *only by assuming the uniqueness of the fixed point*. Hence, while parts of the proof are adapted from Gast et al. 2023a and Hong et al. 2024a, the two main steps that allow us to conclude that the state will lie in a local ball around the fixed point after a finite time are entirely an outcome of our dissipativity idea. This should be compared to the proofs in Gast et al. 2023a, which *assumed UGAP*, or Hong et al. 2024a, which *assumed local stability of the policy* and aperiodicity of the transition kernel *induced by the policy*, to make this claim. \\n \\n**Rev** (4): *In practice, how to determine how many arms to pull at each time, given that $\\\\alpha N$ may not be an integer? If it is not an integer (may consider as an average constraint, there is no need to design an index policy). Equation (3) itself is not an RMAB problem. It should be properly defined with the budget constraint to be satisfied at each time. It may be better to rigorously define (3)*\\n\\n**Answer**: 
We think that there is a misunderstanding: when looking at the randomized rounding policy (in our appendix), it clearly uses the floor function whenever $\\\\alpha N$ is not an integer, and this is reflected in the bound of Theorem 4.1. The reviewer should note that, in the paragraph above equation (2), we clearly restrict our policy space to policies that are stationary and satisfy the budget constraint of pulling at most $\\\\alpha N$ arms. This definition of the policy space $\\\\Pi^{(N)}$ characterizes the restless bandit problem. \\n\\n**Rev** (5): *There are many typos in the paper.*\\n\\n**Answer**: We thank the reviewer for pointing out the typos and will make an effort to correct them.\"}",
"{\"title\": \"Revisions based on your reviews\", \"comment\": \"Dear Reviewer QiUV,\\n\\nWe have highlighted our main contributions on page 2. We have also explicitly described the shortcomings of Yan et al.'s recent work in the average reward setting on page 13 of the appendix. \\n\\nThank you\"}",
"{\"summary\": \"The paper addresses the discrete-time infinite-horizon average-reward Restless Markovian Bandit (RMAB) problem with a Model Predictive Control (MPC) approach. The proposed MPC algorithm achieves the suboptimality gap $O(1/\\\\sqrt{N})$ with a minimal set of assumptions, and can achieve exponential convergence under local stability conditions. Moreover, the MPC algorithm works well in practice with SOTA computational complexity.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe paper introduces a novel application of MPC to the RMAB problem which achieves a suboptimality gap of $O(1/\\\\sqrt{N})$ with a minimal set of assumptions and an exponential convergence rate under local stability conditions.\\n2.\\tThe proposed MPC approach reduces the computational burden associated with solving RMAB problems and performs well in numerical experiments.\\n3.\\tThis paper presents an interesting framework based on dissipativity and provides theoretical analysis for it.\", \"weaknesses\": \"1.\\tIn Section 6, the algorithm LP-priority is not formally introduced. It is confusing to distinguish between the LP-update and LP-priority algorithms due to the lack of a clear definition.\\n2.\\tWhat are the main technical contributions of the theoretical analysis compared to existing works? I suggest highlighting the novelty and primary contributions of the theoretical analysis more clearly.\\n3.\\tIn Line 88, the paper states, \\\"It performs well both in terms of the number of arms N as well as the computational time horizon T, beating state-of-the-art algorithms in our benchmarks.\\\" However, in the numerical experiments section, the authors did not compare the computational efficiency of the proposed MPC approach with existing algorithms. 
Moreover, it would be beneficial to provide a more rigorous discussion on why the MPC approach reduces the computational burden.\", \"questions\": \"1.\\tIn Section 6, the algorithm LP-priority is not formally introduced. It is confusing to distinguish between the LP-update and LP-priority algorithms due to the lack of a clear definition.\\n2.\\tWhat are the main technical contributions of the theoretical analysis compared to existing works? I suggest highlighting the novelty and primary contributions of the theoretical analysis more clearly.\\n3.\\tIn Line 88, the paper states, \\\"It performs well both in terms of the number of arms N as well as the computational time horizon T, beating state-of-the-art algorithms in our benchmarks.\\\" However, in the numerical experiments section, the authors did not compare the computational efficiency of the proposed MPC approach with existing algorithms. Moreover, it would be beneficial to provide a more rigorous discussion on why the MPC approach reduces the computational burden.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Thank you for engaging in the discussion\", \"comment\": \"About the computational efficiency: we agree with you that the solution that we propose, which consists of re-solving a problem at each time step, will be slower than an index-based method. Our implementation suggests that this takes (T\\\\tau)^2 micro-seconds at every decision epoch, which means that one can hardly imagine using this solution for a problem of dimension more than 100 or 1000. Note that for the Whittle index, it is possible to compute the index of models of up to 1000 or 10000 states but not really more than that unless a closed form exists. The advantage of index policies is that the index has to be computed only once.\\n\\nThat being said, the main contribution of our paper is theoretical: to show that the very natural LP-update framework can be shown to be asymptotically optimal. We do so by using what we think is an interesting framework for the community. From a practical point of view, we are not claiming that everyone should use LP-update-like policies for bandits. Our message is rather that, for difficult problems where index policies fail, an LP-update approach can be valuable as it gives a much better performance than everything else.\", \"about_the_revision\": \"we are currently working on an updated version but we did not have the time to converge yet. We wanted to avoid uploading too many versions and only update one that we think is ready. We will upload it soon.\"}",
"{\"comment\": \"We would like to thank the reviewer for the encouraging comments and score.\\n \\n**Rev** *Are there any comparison on the running time of different algorithms, and your algorithm with different $\\\\tau$?*\\n\\nIn the submitted version of the paper, we do not compare the running time of our algorithm as a function of the parameters. One of the reasons is that our code uses a non-optimized Python implementation, and we strongly believe that there are a lot of optimizations that could be implemented to make our code run faster. Still, a quick benchmark seems to indicate that the computational complexity of our implementation grows around $\\\\tau^2$: to compute one point of control, it takes a few milliseconds to solve the problem when $\\\\tau=10$ and around $0.8$ seconds when $\\\\tau=100$.\\n\\n**Rev**: *Are there any experiment on real-data?*\\n\\n**Answer**: As our paper is theoretical by nature, we did not run experiments on real-world data. Note that such experiments are rather uncommon in papers presenting theoretical results on restless bandit problems.\\n\\n**Rev**: *In which part of the proof shows that $\\\\tau(\\\\epsilon) = \\\\mathcal{O}(1/\\\\epsilon)$?*\\n\\n**Answer**: Note that the asymptotic convergence result stems from $\\\\sum_{t = 1}^{\\\\infty} L_{t}(x) - L_{t - 1}(\\\\Phi(x, u)) < \\\\infty$ and $L_{t}(x) \\\\geq L_{t - 1}(x)$. This means we have an infinite sum of non-negative numbers; hence, the $t^{th}$ term must \\\"on average\\\" fall faster than order $1/t$. A more technical way of writing the result may be to say that for a sequence of $\\\\epsilon_i \\\\to 0$, picking $\\\\tau(\\\\epsilon_i) = C/\\\\epsilon_i$ results in $L_{\\\\tau(\\\\epsilon_i)}(x) - L_{\\\\tau(\\\\epsilon_i) - 1}(\\\\Phi(x, u))$ being \\\"frequently in\\\" a ball of radius $\\\\epsilon_i$, but such a statement would be needlessly difficult to read for an uninitiated reader.\"}"
"{\"comment\": \"**Rev**: *In Section 6, the algorithm LP-priority is not formally introduced. It is confusing to distinguish between the LP-update and LP-priority algorithms due to the lack of a clear definition.*\\n\\n**Answer**: We thank the reviewer for the comment; we will make changes to our simulation results to address their comment. \\n\\n**Rev**: *What are the main technical contributions of the theoretical analysis compared to existing works? I suggest highlighting the novelty and primary contributions of the theoretical analysis more clearly.*\\n\\n**Answer**: Please refer to the general comment regarding our contributions. One of our main technical novelties is the use of the dissipativity framework to analyze model predictive control in the restless bandit context.\\n\\n**Rev**: *In Line 88, the paper states, ``It performs well both in terms of the number of arms N as well as the computational time horizon T, beating state-of-the-art algorithms in our benchmarks.\\\" However, in the numerical experiments section, the authors did not compare the computational efficiency of the proposed MPC approach with existing algorithms. Moreover, it would be beneficial to provide a more rigorous discussion on why the MPC approach reduces the computational burden.*\\n\\n**Answer:** We agree that we do not explicitly discuss the computational complexity of our solution. The main reason for not providing measures of the time complexity is that we use a simple Python implementation that we did not try to optimize. Yet, to give an order of magnitude, in all of the examples studied, the time to compute one control is around a few tens of milliseconds.\"}"
]
} |
ETokBVXrbC | Hardware Simulation for Analog Ultrasonic 2D Convolution | [
"Juneho Hwang",
"Xiangyu Chen",
"Travis Zhang",
"Luis Amaro",
"Kilian Q Weinberger",
"Peter Doerschuk",
"Amit Lal"
] | As its name suggests, the convolution operator is the basis and an essential component in Convolutional Neural Networks (CNNs). At the moment, modern CNN architectures rely heavily on parallel computation using GPUs and CPUs to perform many convolutions as fast as possible. However, the performance of computing CNNs is reaching its limit as the scaling of transistors approaches its size limits. The convolutional theorem suggests the possibility of using acoustic waves to efficiently perform the convolution operations through Fourier transforms in analog. This promises hardware that would be several orders of magnitude faster than existing silicon-based approaches. However, to date, nobody has shown the practical feasibility of such an approach. In this paper, we describe the first physics-based simulator for Ultrasonic Fourier Transform Convolutions (UFTC). By exploiting the diffraction nature of the waves, the Fourier transforms can be computed in the time it takes to propagate an ultrasonic wavefront. Our results show that ultrasonic computation could drastically improve the performance of CNNs by 12-458x FLOPS reduction and 1.3-4x computation speedup without loss of prediction accuracy. | [
"simulation",
"convolution",
"ultrasonic",
"hardware",
"accelerator",
"Fourier transform",
"acoustic",
"wave"
] | https://openreview.net/pdf?id=ETokBVXrbC | https://openreview.net/forum?id=ETokBVXrbC | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"q1KvvrpG7W",
"n04ngBJYLn",
"NwEeGQpjtf",
"AaOl7GLUYU",
"0tTHSY1ov0"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1730788927641,
1730353778988,
1737582804440,
1730284075520,
1730734600116
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7590/Reviewer_ZQHf"
],
[
"ICLR.cc/2025/Conference/Submission7590/Reviewer_Tmh6"
],
[
"ICLR.cc/2025/Conference/Submission7590/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7590/Reviewer_5grG"
],
[
"ICLR.cc/2025/Conference/Submission7590/Reviewer_wk4z"
]
],
"structured_content_str": [
"{\"summary\": \"The author proposes a simulation framework for an ultrasonic device that can compute convolutions in CNN models with 12-458x FLOPS reduction and 1.3-4x speedup through the Fourier transform using the intrinsic characteristics of ultrasonic waves in CMOS/piezoelectric arrays.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Strength:\\n1.\\tIt explores an ultra-sonic implementation of Fourier-domain convolution and demonstrates some FLOPS reduction and speedup.\", \"weaknesses\": \"Weaknesses\\n\\n1.\\tHow is the complex number in the Fourier domain handled in the real hardware?\\n\\n2.\\tHardware nonideality is not considered in the modeling and evaluation. There is significant difference between simulation and measurement as shown in Fig 6.\\n\\n3.\\tThe peak power of the proposed system is 3630W, which raises concerns about practicality.\\n\\n4.\\tThe novelty is limited. Fourier-domain convolution is not invented here. Using wave diffraction, e.g., 4-f optical system, for convolution is a well explored literature.\\n\\n5.\\tThe accuracy drop is huge and unacceptable in the application demonstrated. It is hard to justify why this method is practically useful with such a large degradation.\\n\\n6.\\tThe paper claims a simulation framework for UFTC. What is new in this framework? Any special kernel implementation and optimization to speed up the training?\", \"questions\": \"Weaknesses\\n\\n1.\\tHow is the complex number in the Fourier domain handled in the real hardware?\\n\\n2.\\tHardware nonideality is not considered in the modeling and evaluation. There is significant difference between simulation and measurement as shown in Fig 6.\\n\\n3.\\tThe peak power of the proposed system is 3630W, which raises concerns about practicality.\\n\\n4.\\tThe novelty is limited. Fourier-domain convolution is not invented here. 
Using wave diffraction, e.g., 4-f optical system, for convolution is a well explored literature.\\n\\n5.\\tThe accuracy drop is huge and unacceptable in the application demonstrated. It is hard to justify why this method is practically useful with such a large degradation.\\n\\n6.\\tThe paper claims a simulation framework for UFTC. What is new in this framework? Any special kernel implementation and optimization to speed up the training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces a Hardware Simulation for Analog Ultrasonic 2D Convolution, proposing an ultrasonic Fourier transform-based hardware architecture to accelerate CNN computations. By simulating an analog ultrasonic wave propagation system, the authors aim to perform convolutions via Fourier transforms in an analog format, potentially reducing computational complexity and energy consumption compared to traditional digital methods. The approach leverages the convolution theorem and ultrasonic wave diffraction to execute Fourier transforms in the analog domain. Results from the hardware simulation show a potential FLOPS reduction of 12-458\\u00d7 and a computation speedup of 1.3-4\\u00d7, demonstrating competitive accuracy for common CNN models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed approach presents a novel application of ultrasonic wave physics to CNNs, offering an energy-efficient alternative to digital accelerators. By leveraging the convolution theorem and performing Fourier transforms in the analog domain, the method reduces computational load and could provide significant benefits for applications requiring low-power, high-speed computing.\\n\\nThe experimental results are compelling, with up to 458\\u00d7 FLOPS reduction and competitive CNN accuracy across popular models like ResNet and DenseNet. This demonstrates the method\\u2019s potential in practical CNN applications, especially where energy efficiency is critical, such as edge and mobile devices.\\n\\nThe paper makes a valuable interdisciplinary connection between ultrasonic hardware and deep learning, which could inspire future research in analog computation for machine learning. 
The authors also present a realistic hardware simulation model, addressing practical parameters like focal length and pixel size, which strengthens the relevance and feasibility of the proposed system.\", \"weaknesses\": \"The method\\u2019s reliance on specific hardware parameters, such as ultrasonic wave speed and focal length, might limit its adaptability across different types of hardware or more complex neural network architectures. Further investigation into the system's flexibility could enhance its applicability.\\n\\nAlthough the performance gains are promising, the use of analog hardware raises potential concerns about scalability and integration with current digital infrastructures. Additional discussion on interfacing this approach with digital processing units could provide more clarity on its practical deployment.\\n\\nThe accuracy drop noted in some models suggests that the ultrasonic convolution method may need further optimization for complex CNN architectures. Clear guidelines on tuning for different CNN structures could help improve performance consistency across a wider range of applications.\", \"questions\": \"Could the authors discuss potential methods for integrating the ultrasonic analog architecture with digital processing units to create a hybrid system?\\n\\nGiven the noted accuracy drop in more complex models, would it be feasible to develop a specialized tuning procedure to enhance performance consistency?\\n\\nSince the system\\u2019s design is reliant on specific hardware parameters, what challenges might arise if this method were adapted for different hardware setups?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": [\"The paper aims to provide insights into potential hardware for accelerating CNNs based on an ultrasonic hardware-based Fourier transform. The main contribution of the study is to demonstrate the effectiveness of such hardware using simulation. The following points summarize the findings:\", \"Convolution can be computed by using a Fourier transform, inverse Fourier transform, and dot-wise multiplication operation.\", \"Building hardware for performing Ultrasonic Fourier Transform Convolutions (UFTC) in the analog domain can calculate Fourier and inverse Fourier transforms with low power and higher speed.\", \"Creating a simulator as a proof of concept for such hardware, which can evaluate the impact on end-to-end performance of CNN models, is highly desirable.\", \"The paper describes the details of such a simulator and presents results on multiple CNN architectures.\"], \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The evaluation of new hardware techniques aimed at accelerating neural networks is a highly important research direction.\", \"The paper explores the novel idea of using ultrasonic waves for Fourier-based convolution.\"], \"weaknesses\": [\"The paper is not written coherently and lacks clarity about its contribution. Please clarify the points added to the \\\"Questions\\\" section. It would also be advisable to revisit these points to make the paper more readable.\", \"The paper presents the simulator as the major contribution. However, the description of the simulator and its validation is missing. This would be helpful to build confidence in the simulator's efficacy to represent the real hardware.\"], \"questions\": [\"Line 218-220 mentions \\\"The UFT was implemented, as shown in Figure 3, by directly calculating the integral for an input image to the FT plane. 
In order to compute the UFT such that its efficacy could be tested in the DNN models, a linear model of the UFT was developed.\\\"\", \"> What is the impact of using a linear model of UFT? Does it preserve the accuracy and performance of the model?\", \"Line 241-246: What is the significance of removing the quadratic term? Does the simulation system assume the lenses are placed at the focal length? This point is not clear from the text.\", \"Section 3.4: The section introduces several specific numerical values. Are these values used as examples for demonstration, or do they hold specific significance? For example, is the size of the single pixel transducer, 5 \\u03bcm, critical?\", \"Section 3.4: In the expression, FLOP2DFFT is replaced with 5.N^2 log N^2 without explanation. Please provide clarification.\", \"Line 441-446 : \\\"The difference of 240k (290.6k-50.6k) FLOPS was assigned to the ultrasonic diffraction convolution and is assumed to be outside of the A6000 GPU calculation. Also, the FLOPS reduction was calculated by dividing 290.6k by 50.6k.\\\"\", \"> Please share rationale regarding the assumptions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a novel approach to accelerating Convolutional Neural Networks (CNNs) by leveraging ultrasonic waves for convolution operations. This work explores the use of ultrasonic Fourier Transform Convolutions (UFTC), a method that replaces digital convolution operations with an analog Fourier transform computed by propagating ultrasonic waves. A physics-based simulator for this purpose is introduced, showing significant improvements in FLOPS reduction (up to 458\\u00d7) and computation speedup (1.3-4\\u00d7) compared to conventional methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Contribution: The paper proposes an innovative analog computing solution that offers substantial computational benefits, addressing fundamental limitations in current silicon-based architectures.\", \"simulation_based_validation\": \"This work demonstrates the feasibility of UFTC using a well-constructed simulation, which serves as an essential step toward physical implementation.\", \"detailed_theoretical_framework\": \"The authors thoroughly explain the theoretical underpinnings of their approach, particularly how ultrasonic waves can compute Fourier transforms and convolve inputs with minimal latency.\", \"outcome\": \"The reported reduction in computation time and FLOPS demonstrates the potential for large-scale deployment, especially in resource-constrained environments.\", \"weaknesses\": \"My main two concerns are listed below:\", \"limited_hardware_implementation\": \"While the simulation results are promising, the actual hardware demonstration may differ from simulations, which might lead to performance discrepancies in real-world applications.\", \"accuracy_trade_offs\": \"The accuracy reduction (up to 25.7% on certain datasets) raises questions about the generalizability of UFTC, particularly for applications requiring high precision.\", \"questions\": \"There is no clear explanation why there is a 
significant accuracy drop in some cases in Table 2. Please explain.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
ETX8NTEuCj | On the Interpolation Effect of Score Smoothing | [
"Zhengdao Chen"
] | Score-based diffusion models have achieved remarkable progress in various domains with an ability to generate new data samples that do not exist in the training set. In this paper, we examine a hypothesis that this phenomenon manifests an interpolation effect caused by a smoothing of the empirical score function. Focusing on settings where the training set lies in a one-dimensional linear subspace, we take a distribution-agnostic perspective and study the interplay between score smoothing and the denoising dynamics with mathematically solvable models. We demonstrate how score smoothing can lead to the generation of samples that interpolate among the training data within the subspace while avoiding a full memorization of the training set. | [
"score-based diffusion models",
"score smoothing",
"data interpolation",
"generalization vs memorization",
"subspace recovery"
] | Reject | https://openreview.net/pdf?id=ETX8NTEuCj | https://openreview.net/forum?id=ETX8NTEuCj | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zw9PeGXq0S",
"iKeLbwjlxy",
"HMWnNX00YQ",
"F83WKhhxqV",
"5JUZMDv21o",
"4jQunGP9uv"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision",
"official_review",
"meta_review"
],
"note_created": [
1730236476899,
1730693085499,
1730665608206,
1737524067515,
1731323986049,
1734533799564
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10645/Reviewer_be3W"
],
[
"ICLR.cc/2025/Conference/Submission10645/Reviewer_mS12"
],
[
"ICLR.cc/2025/Conference/Submission10645/Reviewer_hCgf"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10645/Reviewer_75YM"
],
[
"ICLR.cc/2025/Conference/Submission10645/Area_Chair_fUx7"
]
],
"structured_content_str": [
"{\"summary\": \"This paper investigates the interpolation effect that arises when smoothing the empirical score function (ESF) in score-based diffusion models, specifically exploring its impact on generative sample diversity. The authors focus on a theoretical model where training data lie in a one-dimensional subspace and use a local smoothing approach on the ESF, demonstrating that smoothing allows for interpolation between training points and mitigates memorization. They offer a mathematical analysis under simplified settings and provide numerical illustrations to validate their findings. Their method seeks to clarify why score smoothing may improve generalization in generative models by facilitating smoother transitions within the data's underlying structure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents a novel perspective on how score smoothing enables generative models to interpolate between data points, avoiding full memorization\\u2014a valuable insight for understanding generalization in diffusion models. Numerical experiments illustrate the model\\u2019s interpolation behavior, validating that a smoothed ESF leads to sample distributions that interpolate among training points. This supports the theoretical analysis and enhances understanding of the smoothing effects on generative dynamics.\", \"weaknesses\": \"1. A typo at line 57: $t$ should be inside $\\\\sqrt{\\\\cdot}$.\\n\\n2. The motivation for the chosen local smoothing technique is not entirely intuitive. While they study an approximation in the $L^2$ sense, it remains unclear why this specific smoothing function is preferable over alternative smoothing methods. The paper acknowledges this limitation but could further clarify. See questions below.\\n\\n3. The relationship between $\\\\kappa$ and $t$ in section 3 is a bit confusing. See questions below.\", \"questions\": \"1. 
The motivation for the chosen local smoothing technique is not entirely intuitive. The author shows that their smoothed score function is close to the true score in the $L^2$ sense, which is usually assumed for the NNs in practice. If one can further show that 'if two different score functions are both close to the true score in the $L^2$ sense, then the dynamics driven by them are close to each other', then studying a specific smoothed score could directly help us understand the others.\\n\\n2. The relationship between $\\\\kappa$ and $t$ in section 3 is a bit confusing. In Proposition 2, $\\\\kappa$ is defined by $t$, while in Proposition 3, $t$ is restricted to $[0, \\\\kappa]$. In practice, should $t$ be specified first?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work studies the interpolation effect of score smoothing and provides numerical experiments.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper considers a simple one-dimensional model to study the smoothed score function. Mathematical properties are derived and numerical experiments are conducted.\", \"weaknesses\": \"Major questions:\\n1. The biggest concern is the gap between the problem that the authors attempt to study and the model proposed in this paper. The starting point is memorization vs. generalization in diffusion models. The authors conjecture the memorization is from the smoothed score function. This raises two natural questions: 1) how do you know the NN-learned score estimator has such a smoothing effect in practice? 2) Even though the score estimator used in practice is smoothed, how is this property linked to generalization, i.e., overcoming memorization? These two questions are not well-studied in the paper.\\n2. The paper considers a(n) (essentially) one-dimensional model. In this case, the (empirical) score function has a closed form. However, the authors do not study this score function but instead turn to its smoothed version. Then I would like to ask, even if we know all the properties of the smoothed score function, how can we have more knowledge about the true score function, even in this simple case?\\n3. The authors conduct numerical experiments to compare three score functions/estimators. However, it is not clear how to interpret the results. The distribution after adding noise must be the same, as the standard Gaussian is the stationary distribution in all three cases. Then, what new information is gained by plotting the density and histograms?\", \"minor_questions\": \"1. Why do you introduce $\\\\hat{x}_t^{(n)}$, $\\\\bar{s}_t^{(n)}$ and $$\\\\widehat{s}^{(n)}_{t, \\\\tau}$$, three versions of the score function? 
Also, if the target distribution is $p_0^{(n)}$, why do you need a smoothed version $\\hat{p}_t^{(n)}$? Please clarify the relation between them and emphasize the necessity of introducing these auxiliary functions/distributions.\\n2. The target distribution studied seems to be relevant to Gaussian mixture models. There are some papers studying learning GMMs using diffusion models, e.g., https://arxiv.org/abs/2404.18869 and https://arxiv.org/abs/2404.18893. Any relation to the literature?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper investigated the smoothing effect in score estimation and presented theoretical analyses on a simplified data model. Numerical results were also presented to support the argument.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-organized and has a clear presentation. The theoretical results on two points and one-dimensional subspace are intuitive and easy to follow.\", \"weaknesses\": [\"It is not clear how the analyses of one one-dimensional subspace can be extended to more practical data.\", \"For example, how can the analyses be connected back to address memorization/hallucination behaviors mentioned in the introduction?\", \"In line 475, I cannot agree the analysis is \\\"distribution-agnostic,\\\" as it still relies on a highly simplified assumption of the practical data.\"], \"questions\": \"More broadly, the smoothing effect can be related to generalization error and the inductive bias of deep neural networks. Could the authors briefly comment on how the analyses can help understand these phenomena?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper theoretically explores the score of variance exploding SDEs (VE-SDEs) to explain the success of score-based diffusion models by arguing that the smoothing of the data score generates novel samples that interpolate across the training data subspace.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper starts off with a simpler and more accessible 1D, two-data-point analysis to motivate their insights to the reader before moving on to the more technical results.\\n2. The paper's insights into their smoothed ESF approximation seem valuable to the community. In particular, it shows that the KL regarding a uniform distribution is bounded (unlike when using the ESF directly), which is quite neat.\", \"weaknesses\": \"I think the paper could do a better job at motivating their results for a wider audience:\\n\\n1. Equation (8) can and should be defined before Lemma 1; in Lemma 1 you introduce the smoothed score defined in an implicit way and then redefine it and label it as equation (8). It would be better to introduce equation (8) first and then use the notation defined in equation (8) to state Lemma 1. \\n2. It seems plausible that a network would learn a smoothed version of the score (as discussed in limitations), but you never strongly motivate / discuss this; earlier discussion of this would motivate your choice of analysis better. Either way it would be helpful for you to motivate and introduce the smoothed score a bit earlier in the text.\", \"questions\": \"1. Line 057 $\\\\sigma$ should be $t$ instead? Otherwise there's no reference to $t$ in the abbreviation/RHS of the equation.\\n2. Whilst big O notation is standard, I think it's a bit cleaner and more common to write these results with $\\\\leq$\\n3. 
As mentioned in the weaknesses, it's clear the smoothed score induces a controllable loss, but I think to complete the story the authors need to discuss / motivate why we would learn anything akin to the specific choice of smoothed score used in this work? I understand this is mentioned in the limitations, but maybe something earlier in the introduction would be helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper investigates the interpolation effect due to smoothing the score function in diffusion models. The authors consider a rather basic model where the training data is restricted to a 1d subspace, they perform a theoretical analysis (in which the empirical score function has a closed form) and then validate such analysis with numerical results.\\n\\nThe reviewers agree that the topic is interesting, the paper is clear and well written, and the results bring some interesting insights. However, such insights are hindered by the strong assumptions required by the theory (unidimensional data). The gap between the theory and the setting that we authors wish to analyze is quite significant and this constitutes the main weakness of the manuscript, as raised by reviewer mS12. I also think this is major weakness and therefore recommend a rejection at this stage.\\n\\nI do think that the approach of the paper has potential and I would encourage the authors to pursue this line of work, providing a more general analysis and resubmitting an improved version to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"The response of the authors did mitigate some of the concerns raised by the reviewers, but not really the main issue pointed out by reviewer mS12 about the strong assumptions needed by the theoretical analysis.\"}"
]
} |
ETMIPPtJp9 | FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering | [
"Yuan Sui",
"Yufei He",
"Nian Liu",
"Xiaoxin He",
"Kun Wang",
"Bryan Hooi"
] | Large language models are often challenged by generating erroneous or `hallucinated' responses, especially in complex reasoning tasks.
To mitigate this, we propose a retrieval augmented reasoning method, FiDeLiS, which enhances knowledge graph question answering by anchoring responses to structured, verifiable reasoning paths. FiDeLiS uses a keyword-enhanced retrieval mechanism that fetches relevant entities and relations from a vector-based index of KGs to ensure high-recall retrieval. Once these entities and relations are retrieved, our method constructs candidate reasoning paths which are then refined using a stepwise beam search. This ensures that all the paths we create can be confidently linked back to KGs, ensuring they are accurate and reliable.
A distinctive feature of our approach is its blend of natural language planning with beam search to optimize the selection of reasoning paths. Moreover, we redesign the way reasoning paths are scored by transforming this process into a deductive reasoning task, allowing the LLM to assess the validity of the paths through deductive reasoning rather than traditional logit-based scoring. This helps avoid misleading reasoning chains and reduces unnecessary computational demand. Extensive experiments demonstrate that our method, even as a training-free method which has lower computational costs and superior generality, outperforms established strong baselines across three datasets. The code of this paper will be released at https://anonymous.4open.science/r/FiDELIS-E7FC. | [
"Large Language Models",
"Knowledge Graph Question Answering",
"Retrieval-Augmented Generation"
] | Reject | https://openreview.net/pdf?id=ETMIPPtJp9 | https://openreview.net/forum?id=ETMIPPtJp9 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vFtj3stiWk",
"qDqjUaSiSt",
"jezGDz0Jgi",
"hkVOWgeuEI",
"Wx4Gnc4dlc",
"W3jvt6me1S",
"VozfzfXdu3",
"TJofMcXKFb",
"QIHCaCLcfF",
"OEOdx10u4t",
"N9K7bevViC",
"N3Uo5zqvcS",
"Mexal3aXvP",
"Gne6ID2dYG",
"C0UPm2QJlb",
"7rkxgZjtjI",
"4r6z3MVn28",
"2nWZmLk7CI",
"01tT1obOEK"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"decision",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732375968720,
1732377127926,
1732519744285,
1732535140517,
1730555314107,
1730391300570,
1737523849031,
1732375945625,
1732376023403,
1734918511906,
1730361742956,
1732670972596,
1732378343398,
1730720294415,
1732536410236,
1732376128099,
1732377191655,
1732513491573,
1732679663122
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Reviewer_d3aA"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Reviewer_gxgg"
],
[
"ICLR.cc/2025/Conference/Submission7578/Reviewer_mHFm"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Area_Chair_REyR"
],
[
"ICLR.cc/2025/Conference/Submission7578/Reviewer_rRZc"
],
[
"ICLR.cc/2025/Conference/Submission7578/Reviewer_rRZc"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Reviewer_d3aA"
],
[
"ICLR.cc/2025/Conference/Submission7578/Reviewer_mHFm"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7578/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"---\\n### **W4 - Clarification of some minor concerns**\\n\\n**(1) Are the reasoning step candidates all from $E_m$ and $R_m$? Does beam search only execute on the candidates?**\\n\\nYes, you are right. All the reasoning step candidates are constructed from $E_m$ and $R_m$ based on Eq. (3), and the beam search process is executed on the reasoning step candidates at each timestamp.\\n\\n**(2) How is the conclusion obtained from Table 6 that the proposed method shows superior efficiency compared to ToG?** \\n\\nWe would like to clarify that Table 6 includes results when replacing Path-RAG with ToG for the retrieval mechanism (labeled as \\u201cw/o Path-RAG using ToG\\u201d). These results show significantly higher average runtime (e.g., 74.26s on WebQSP and 132.59s on CWQ) and increased token usage using ToG compared to our proposed Path-RAG. This directly highlights the efficiency improvements achieved by using Path-RAG.\\n\\nIn addition, on both WebQSP and CWQ, our method consistently achieves lower runtime and token usage while maintaining or improving performance (Hits@1). This demonstrates that our approach streamlines the reasoning process, leading to more efficient use of resources compared to ToG.\"}",
"{\"comment\": \"We sincerely thank the reviewer for the thoughtful comments. Below, we address each concern in detail:\\n\\n---\\n### **W1: The reviewer is concerned that the novelty of the paper is limited, especially compared with existing retrieval-augmented methods.**\\n\\nWe acknowledge that there are similarities between the proposed method and previous retrieval-augmented methods, especially as we both follow the workflow of retrieving information from an external resource and using it to enhance LLM reasoning. However, we would like to highlight that our contributions should not be considered incremental efforts for the following reasons:\\n\\n**Paradigm shift from retrieving knowledge facts to reasoning paths from KG.** Unlike existing retrieval-augmented methods that retrieve individual triplets from a knowledge graph (KG) as additional knowledge to support reasoning, our method leverages the structural information of the KG. Specifically, we retrieve reasoning paths\\u2014sequences of interconnected triplets\\u2014directly from the KG to guide LLM reasoning. These reasoning paths provide a more structured and factual basis for reasoning compared to the self-generated chain-of-thought (CoT) by LLMs, ensuring greater factual accuracy. Additionally, the reasoning paths enhance the explainability of the reasoning process, offering a clear rationale for how the final answer is derived.\\n\\nIn addition, our proposed method focuses on **two challenges** when retrieving reasoning paths from KG as follows:\\n\\n**(a) Issues of premature stopping or excessive continuation when extending the reasoning paths**: When retrieving reasoning paths from a KG, two critical challenges can arise: premature stopping, where the retrieval process halts before a complete reasoning path is constructed, or excessive continuation, where the path is extended unnecessarily, including irrelevant or incorrect steps. 
These issues will hinder the performance of LLMs\u2019 decision making (as shown in the case study from lines 422 to 455). Our proposed method uses deductive reasoning as a clear and objective way to decide when to stop extending reasoning paths. Deductive reasoning involves verifying whether each reasoning step logically follows from the previous steps and the user query, making it more reliable and less ambiguous. This approach not only simplifies decision-making but also reduces bias in the reasoning process, ensuring fairer and more accurate stopping criteria. We would like to highlight that this straightforward shift in controlling the end point of reasoning paths demonstrates promising performance, particularly in cases requiring deeper reasoning steps (e.g., CoT and CR-LT with reasoning depths >3).\\n\\n**(b) Issues of inefficiency in handling large reasoning step candidates**: In addition, previous work did not consider the retrieval process, which means it considers all neighboring entities/relations during reasoning path extension. This can be particularly problematic for large KGs that have a vast number of neighbors available. It will substantially increase the computational cost for LLM reasoning (as more tokens are considered) and introduces noise from irrelevant nodes and edges, which hampers the effectiveness of the subsequent LLM\u2019s decision making process. In contrast, our method first retrieves entities and relations relevant to the query to construct reasoning path candidates. This allows the LLM to focus on a **smaller, more relevant set of candidates** during decision-making, improving efficiency and reducing the need to filter out irrelevant noise. 
As demonstrated in our experiments, this retrieval-based approach (Path-RAG) consistently achieves better overall performance across different settings (Tables 1 and 2).\\n\\n---\\n### **W2: Why use Path-RAG and what is the sensitivity of the hyperparameter $\\\\alpha$?**\\n\\nPath-RAG is proposed to retrieve entities and relations relevant to the query to construct reasoning path candidates. This allows the LLM to focus on a smaller, more relevant set of candidates during decision-making, improving efficiency and reducing the need to filter out irrelevant noise. As demonstrated in our experiments, this retrieval-based approach (Path-RAG) consistently achieves better overall performance across different settings (Tables 1 and 2).\\n\\nQuantitatively, the hyperparameter $\\\\alpha$ in the scoring function (Eq. (3)) balances short-term outcomes and long-term potential in reasoning paths. A higher $\\\\alpha$ prioritizes paths with long-term benefits, even if they appear sub-optimal initially, whereas a lower $\\\\alpha$ emphasizes immediate gains, potentially overlooking future impacts. We select $\\\\alpha$ using grid search. As noted in lines 849 to 855, $\\\\alpha$ does not significantly affect the overall system performance, indicating that Eq. (3) is not highly sensitive to variations in $\\\\alpha$.\"}",
"{\"comment\": \"I thank the authors for their detailed responses. However, I still have concerns about the performance of the method.\\nAccording to Figure 2 in the paper, if the beam width and depth are set to 4 for ToG, the Hits@1 is about 60% on CWQ, but the result in Table 1 is only 57.59%. According to Table 2, when replacing Path-RAG with ToG, the Hits@1 on CWQ is only 59.47%. \\nSimilarly, for WebQSP, if the beam width and depth are set to 4 for ToG, the Hits@1 should be above 75% according to Figure 2, but the value in Table 1 is only 75.13%.\\nComparing Tables 1 and 2, I still have concerns about the performance of the method.\"}",
"{\"comment\": \"We appreciate the reviewer\\u2019s detailed observations regarding the consistency of our results, as maintaining reliability and rigor in our work is a top priority. We would like to highlight that all the experiments reported in the paper were conducted with three independent runs, with the results averaged to mitigate random variations.\\n\\nUpon reviewing Figure 2 in light of the reviewer\\u2019s comment, we re-checked the experimental logs and identified an anomaly in the CWQ data point of ToG with beam width=4 and depth=4. Specifically, one of the three trials produced an unusually high score, leading to an inflated average in Figure 2 (a) and (c) for ToG with beam-width=4 and beam-depth=4 (around 60% Hits@1). To validate, we re-ran the experiment using the same configuration and obtained a Hits@1=58.12% at this data point, which generally aligns with the results in Tables 1 and 2 and falls within the expected variance range due to the stochastic nature of LLMs. **We have corrected this value in Figure 2 (a) and (c), and added a note explaining the adjustment to ensure transparency and prevent further confusion.** We apologize for any oversight and assure you that it was not intentional. We have also carefully reviewed all other reported results to confirm that this correction does not impact any other findings or conclusions in the paper.\\n\\nRegarding the reviewer\\u2019s concern that ToG\\u2019s performance differs across Table 1, Table 2, and Figure 2, we would like to clarify that **these experiments in Tables 1 and 2 and Figure 2 were conducted independently** (to ensure reproducibility under varying configurations). Consequently, minor variations can occur due to factors such as random sampling and LLM stochasticity. However, we would like to highlight that these variations are generally within acceptable ranges and do not undermine the overall trends or key conclusions of our work.\"}",
"{\"summary\": \"This paper introduces FiDeLiS, a retrieval-augmented reasoning framework that enhances LLM performance in KGQA tasks. The framework addresses the challenge of ensuring reliable reasoning by anchoring LLM responses to verifiable reasoning paths within knowledge graphs. FiDeLiS consists of two main components: Path-RAG, which retrieves relevant entities and relations from knowledge graphs using a keyword-enhanced mechanism, and DVBS, which constructs reasoning paths using a combination of natural language planning and beam search. A key innovation is the transformation of path scoring into a deductive reasoning task, moving away from traditional logit-based scoring. Through comprehensive experiments across three datasets, the authors demonstrate that FiDeLiS achieves competitive performance compared to existing baselines while being training-free and computationally efficient. The work contributes to making LLM reasoning more reliable and interpretable in knowledge-intensive tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method effectively addresses the reliability issue in reasoning by ensuring each step in the reasoning path can be traced back to the original KG, providing verifiable and interpretable results.\\n2. The introduction of deductive reasoning verification mechanism offers an innovative solution to the reasoning termination problem, which has been a significant challenge in existing approaches.\", \"weaknesses\": \"1. There are minor writing issues (e.g., redundant \\\"based\\\" in line 203, \\\"questins\\\" misspelling in line 790) that should be addressed.\\n2. The paper lacks in-depth analysis of why deductive reasoning verification is more suitable for this task compared to traditional logit-based scoring methods. A theoretical or empirical comparison would strengthen this claim.\\n3. 
The core assumption in constructing reasoning paths (that earlier timesteps t have reasoning step candidates St with higher semantic similarity to the query, as reflected in Equation 3) needs more thorough analysis. The paper should investigate whether this assumption holds for problems of varying complexity and whether over-reliance on semantic similarity between reasoning steps and queries might lead to errors.\", \"questions\": [\"Regarding Algorithm 2, does Path-RAG maintain the same retrieval strategy when obtaining the next possible reasoning step candidates St? Does it incorporate previously formed reasoning steps to aid in retrieval?\", \"How sensitive is the method to the quality of the knowledge graph? It would be valuable to see an analysis of performance across KGs of varying quality or completeness.\", \"How does the method handle questions that require commonsense reasoning during the inference process? The current description focuses on constructing reasoning steps from existing KG relations, but real-world questions often require combining structured knowledge with commonsense reasoning.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a novel method combining LLMs and KGs, primarily consisting of two modules: Path-RAG and DVBS. Specifically, Path-RAG is responsible for retrieving relevant information from knowledge graphs, while DVBS selects the most promising reasoning paths through beam search. I am pleased to see the experimental performance across three KGQA datasets, where the authors' method shows improvements in both accuracy and efficiency. Additionally, the authors conducted extensive ablation experiments, which enhances the paper's soundness.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors' writing is clear, with well-explained methodology.\", \"The authors achieved state-of-the-art performance across multiple KGQA datasets, demonstrating consistent performance improvements.\"], \"weaknesses\": \"(The following weaknesses represent my second version, which incorporates feedback from the Associate Program Chairs)\\n\\n- The paper's primary contribution appears incremental, as it primarily combines existing retrieval and ranking mechanisms without introducing fundamentally new theoretical insights or technical innovations - the authors should clarify what specific technical advances differentiate their approach from previous retrieval-augmented systems.\\n\\n- Path-RAG appears to be a complex retrieval mechanism, and given that most entities and relations are connected in the knowledge graph, it's unclear how this approach is beneficial. In my view, Path-RAG might be more useful in cases where entities and relations are not directly connected. However, the paper doesn't address how the system handles multi-hop reasoning chains. Additionally, the scoring function is influenced by $\\\\alpha$, but it's unclear how $\\\\alpha$ is determined. 
Only qualitative results seem to be provided.\\n\\n- The paper only compares with ToG, missing some new baseline methods, including KG-CoT[1], ToG2[2], and GNN-RAG[3]. The authors should either include these comparisons or provide compelling justification for their exclusion.\\n\\n- Furthermore, there are areas for experimental improvement. The experimental methodology would benefit from evaluation on more recent language models (such as open-source alternatives or current SOTA models) to demonstrate the robustness and generalizability of the proposed approach across different model architectures.\\n\\n\\n\\n---\\n\\n[1] KG-CoT: Chain-of-Thought Prompting of Large Language Models over Knowledge Graphs for Knowledge-Aware Question Answering\\n\\n[2] Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation\\n\\n[3] GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning\", \"questions\": \"Can you provide a justification or explanation for the concept of \\\"deductive verification\\\"? I have several concerns regarding the proposed method's ability to perform deductive verification, which is a significant claim within the paper. The model appears to lack a structured knowledge base or set of rules from which it can draw conclusions, which would be necessary for true deductive verification. Given this, I am skeptical of the term \\\"deductive verification\\\" being used in this context.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"We sincerely thank the reviewer for the thoughtful comments. Below, we address each concern in detail:\\n\\n---\\n### **W1: The reviewer is concerned that the novelty of the paper is limited, especially compared with previous work ToG.**\\n\\nWe acknowledge that there are indeed similarities between the proposed method and previous work ToG, especially as we both consider using the beam search paradigm for retrieving reasoning paths from KG to guide the LLM reasoning. However, we would like to highlight that our contributions should not be considered incremental efforts for the following reasons:\\n\\n**(a) Issues of premature stopping or excessive continuation**: as noted in the manuscript (lines 77 to 81), ToG relies on LLMs to determine when to stop extending a reasoning path by assessing whether the current path is adequate to answer the question. However, this evaluation primarily focuses on superficial relevance and does not ensure that each reasoning step is factually accurate or logically consistent with previous steps. This limitation often results in challenges such as premature stopping or excessive continuation of reasoning paths, leading to retrieved paths that are either incomplete or contain incorrect steps. This issue will hinder the performance of LLMs\\u2019 decision making (as shown in the case study from lines 422 to 455).\\n\\nIn contrast, our proposed method uses deductive reasoning as a clear and objective way to decide when to stop extending reasoning paths. Deductive reasoning involves verifying whether each reasoning step logically follows from the previous steps and the user query, making it more reliable and less ambiguous. This approach not only simplifies decision-making but also reduces bias in the reasoning process, ensuring fairer and more accurate stopping criteria. 
We would like to highlight that this straightforward shift in controlling the end point of reasoning paths demonstrates promising performance, particularly in cases requiring deeper reasoning steps (e.g., CoT and CR-LT with reasoning depths >3), where ToG sometimes struggles to perform effectively.\\n\\n**(b) Issues of inefficiency in handling large reasoning step candidates**: In addition, ToG did not consider the retrieval process, which means it considers all neighboring entities/relations during reasoning path extension. This can be particularly problematic for large KGs that have a vast number of neighbors available. It will substantially increase the computational cost for LLM reasoning (as more tokens are considered) and introduces noise from irrelevant nodes and edges, which hampers the effectiveness of the subsequent LLM\\u2019s decision making process.\\n\\nIn contrast, our method first retrieves entities and relations relevant to the query to construct reasoning path candidates. This allows the LLM to focus on a **smaller, more relevant set of candidates** during decision-making, improving efficiency and reducing the need to filter out irrelevant noise. As demonstrated in our experiments, this retrieval-based approach (Path-RAG) consistently achieves better overall performance across different settings (Tables 1 and 2).\\n\\nOverall, we believe these contributions substantiate the novelty of our work and demonstrate meaningful advancements over ToG, particularly in addressing the key limitations mentioned above.\\n\\n---\\n### **W2: Unfair comparison regarding hyper-parameters of beam width and depth.**\\n\\nThank you for pointing out this issue. We would like to clarify that the results presented in Table 2 are based on our reproduction of ToG. In these experiments, we ensured that both methods used the same beam width and depth (both set to 4) to maintain a fair comparison. **We have explicitly added this clarification in the revised version of our paper**. 
Your observation that ToG achieves higher performance with a beam width and depth of 4 is correct, as both methods leverage beam search, and generally, increasing the search space tends to increase the possibility of finding promising solutions, which could further enhance the overall performance.\\n\\n---\\n### **W3: Does the overall performance mainly rely on Path-RAG, as shown in the ablation study in Table 2?**\\n\\nWe appreciate the reviewer's observations regarding the role of Path-RAG in our ablation study in Table 2. While Path-RAG plays a significant role in enhancing the overall performance, we would like to emphasize that the improvements are the result of **synergy between Path-RAG and our deductive reasoning verification mechanism**. As shown in Table 2, replacing Path-RAG with ToG results in a performance decline, but still achieves a **1.88% improvement on CWQ** and **0.99% on CR-LT** compared to ToG. This demonstrates that the proposed deductive reasoning performs better on cases requiring more complex and multi-step reasoning (where CWQ and CR-LT require longer reasoning steps) and also verifies that the performance enhancement does not rely solely on Path-RAG.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their thoughtful comments. Regarding the writing issues mentioned in **W1**, we have revised the manuscript and corrected the errors. For the other concerns, we provide detailed responses below:\\n\\n---\\n### **W2: Why is deductive reasoning verification more suitable than traditional logit-based scoring methods?**\\n\\nLogit-based scoring methods rely on softmax-based probability scores to assess plausibility. These scores lack interpretability and often exhibit overconfidence, where invalid reasoning steps can sometimes be assigned higher probabilities due to inherent model biases. In contrast, our proposed deductive verification follows the idea that each intermediate reasoning step should be **verified** to check whether it is logically consistent with the previous steps and whether it contributes to answering the user query (as shown in the example in Lines 987\\u20131020). This mechanism helps minimize error propagation when extending reasoning paths, especially in cases where reasoning paths seem plausible but contain subtle errors in intermediate steps.\\n\\nTo validate this claim, we added an additional experimental setting using \\\"logit-based scoring\\\", where the endpoint of reasoning path extension was framed as a binary classification task (\\u201cyes\\u201d or \\u201cno\\u201d) and we used the corresponding probability scores as the decision criterion. 
The comparison between deductive reasoning verification and logit-based scoring, as well as adequacy verification (used in ToG), is shown below:\\n\\n| Methods | WebQSP (hits@1) | CWQ (hits@1) |\\n|---|:---:|:---:|\\n| FiDeLis + deductive verification | 79.32 | 63.12 |\\n| FiDeLis + adequacy verification (used in ToG) | 74.13 | 57.23 |\\n| FiDeLis + logit-based scoring | 73.47 | 54.78 |\\n\\nThe results show that deductive reasoning consistently outperforms logit-based scoring by ensuring better logical grounding and reducing overconfidence errors.\\n\\n---\\n### **W3: Does Path-RAG perform well across varying complexities, and does it over-rely on semantic similarity?**\\n\\nWe appreciate the reviewer's concern. As shown in Figure 3(a) and (b), Path-RAG consistently achieves a higher coverage ratio of ground-truth reasoning steps compared to the baseline across varying complexities of questions (controlled by the reasoning depths required to answer each question). Unlike vanilla retrievers that rely solely on semantic similarity, Path-RAG incorporates structural information via next-hop connections (as shown in Eq. (3)), which significantly enhances the retrieval performance.\\n\\nTo further validate this, we added another baseline from KAPING[1], and report the coverage ratio comparison on CWQ as follows:\\n\\n| Method | Depth=1 | Depth=2 | Depth>3 |\\n|---|:---:|:---:|:---:|\\n| Vanilla Retriever | 59.34 | 52.17 | 47.31 |\\n| KAPING (top-k triplet retrieval) | 65.72 | 60.41 | 53.11 |\\n| Path-RAG | 72.61 | 69.38 | 62.78 |\\n\\nThese results demonstrate that Path-RAG consistently achieves higher coverage, especially for deeper reasoning paths. Unlike the triplet-based retrieval in KAPING, Path-RAG leverages graph structure to capture not only highly relevant nodes and edges but also intermediate \\u201cbridge\\u201d nodes connecting other highly relevant nodes and edges among next-hop neighbors. 
This design mitigates over-reliance on semantic similarity alone by ensuring that structural relationships are also considered, reducing the likelihood of errors in reasoning path construction.\\n\\n* [1] Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering: https://arxiv.org/pdf/2306.04136\"}",
"{\"metareview\": \"The paper introduces FiDeLiS, a retrieval-augmented reasoning method for knowledge graph question answering (KGQA). It employs Path-RAG for retrieving relevant entities and Deductive-Verification Beam Search (DVBS) for constructing and verifying reasoning paths. While the approach addresses reasoning reliability and efficiency in KGQA, it has substantial weaknesses. The paper lacks theoretical novelty, relying on incremental improvements over prior work such as ToG. The experimental comparisons are incomplete, omitting critical baselines like KG-CoT and ToG 2.0. Additionally, the reliance on ad hoc modules like Path-RAG raises questions about general applicability and scalability. The writing is unclear in several sections, with important methodological details missing or insufficiently explained.\\n\\nStrengths include addressing a relevant problem, demonstrating empirical improvements on KGQA benchmarks, and releasing code for reproducibility. However, the lack of novelty, incomplete evaluation, and unclear methodology outweigh the contributions.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about limited novelty, unclear methodological choices, and incomplete baseline comparisons. While the authors provided clarifications and additional experiments, key issues remain unresolved. Incremental contributions and missing critical baselines were particularly problematic. The authors\\u2019 efforts to address these concerns were appreciated but insufficient to meet the acceptance threshold. The paper\\u2019s limitations justify the rejection decision.\"}",
"{\"summary\": \"This paper proposes a method for extracting knowledge from a Knowledge Graph to enhance model reasoning capabilities. The core innovation of this paper lies in the Deductive-Verification Guided Beam Search, which enhances efficiency by allowing the model to select the top-k reasoning steps and implement pruning during path extension.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The application of this approach demonstrated significant improvements across various metrics on three KGQA datasets.\\n2. This study substantiates that knowledge graphs can, to a certain degree, mitigate the hallucination problem in model reasoning within open-domain question answering.\", \"weaknesses\": \"1. Several sections of the manuscript do not adequately convey essential information. For example, the title lacks precision in capturing the central elements of the study, and the abstract does not sufficiently highlight the primary research questions or issues.\\n2. In the experiments, comparing fine-tuned LLMs with GPT-4 seems unfair.\\n3. The Vanilla retriever compared in Section 3.3 is a relatively simple retriever.\", \"questions\": \"In retrieving specific knowledge from a knowledge graph, this paper proposes a method based on keyword extraction from the question and similarity assessment. However, Section 3.3 only compares embedding models. Does the accuracy of keyword extraction significantly impact the effectiveness of the retrieval approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your response. Based on the additional keyword extraction ablation experiments, it appears that the role of keywords in your approach is minimal, with only a 3.83% improvement at depth 1. However, this raises another point of confusion for me\\u2014if the keyword extraction step is completely removed, how do the subsequent steps of the approach function? My original intention was to evaluate how the correctness of keyword extraction impacts the final results. Overall, while I think there is value to the theoretical findings of the paper, I keep my score.\"}",
"{\"comment\": \"---\\n### **W3: Adding more recent baselines and more model backbones.**\\n\\nWe acknowledge that **KG-CoT** and **ToG-2.0** were recently published and are contemporaneous with our own work. As they became available around the same time as our submission, we have not considered them as primary baselines in our study. However, we appreciate the reviewers bringing these to our attention and will take them into account in future work.\\n\\nFor the reference **GNN-RAG**, we acknowledge that this paper has shown better performance as a framework that requires extra training, while our proposed method is a training-free framework. The comparison between these two frameworks may not be very fair. \\n\\nTo address the reviewers\\u2019 concerns about generalizability and robustness across different model architectures, we added results with more recent language models as follows:\\n\\n| Methods | WebQSP (hits@1) | CWQ (hits@1) |\\n|---|:---:|:---:|\\n| FiDeLiS+gpt-4o | 86.34 | 73.48 |\\n| FiDeLiS+qwen-2-7B | 64.32 | 50.79 |\\n| FiDeLiS+llama-3.1-8B | 74.41 | 55.73 |\\n| FiDeLiS+mixtral-8x7b-instruct | 68.13 | 52.37 |\\n\\n---\\n### **Q1: Explanation for the concept of \\\"deductive verification\\\"**\\n\\nThank you for pointing out this concern. We understand that the term \\u201cdeductive verification\\u201d might traditionally suggest the use of a formal, rule-based system or a structured knowledge base. 
In our work, **however**, the term is used more broadly to describe how the LLM infers logical consequences by evaluating whether a given reasoning path aligns with the user query and retrieved evidence from the knowledge graph.\\n\\nFor instance, as described in the manuscript (Lines 989\\u20131020), consider the question: *\\u201cWho is the ex-wife of Justin Bieber\\u2019s father?\\u201d* After one round of beam searching, the current reasoning path is:\\n \\n**\\u201cJustin_bieber \\u2192 people.person.father \\u2192 Jeremy_bieber.\\u201d**\\n\\nThe next step candidates are:\\n1. *people.married_to.person \\u2192 Erin Wagner*\\n2. *people.person.place_of_birth \\u2192 US*, . . .\\n\\nIn this context, **\\u201cdeductive reasoning verification\\u201d** operates as follows:\\nThe model evaluates whether the user query can be logically deduced by extending the current reasoning path with a candidate step. For instance, we assess whether the candidate **people.married_to.person \\u2192 Erin Wagner** logically supports the query. Specifically, we represent reasoning paths as premises and the user query as a conclusion, then check whether the conclusion can be deduced from the premises.\\n\\n**Premise**: \\n- Justin\\\\_bieber $\\\\to$ people.person.father $\\\\to$ Jeremy\\\\_bieber (**from the current reasoning path**) \\n- Jeremy\\\\_bieber $\\\\to$ people.married\\\\_to.person $\\\\to$ Erin Wagner (**from the next step candidates**)\\n\\n**Conclusion**:\\n- Erin Wagner is the ex-wife of Justin Bieber\\u2019s father. \\n(Using a large language model (LLM) zero-shot approach to reformat the question into a cloze filling task, we use the last entity from the next step candidates, \\\"Erin Wagner\\\", to fill the cloze.)\\n\\nThe LLM is prompted to evaluate whether the conclusion logically follows from the premises. If the answer is \\u201cyes,\\u201d the reasoning path extension is considered complete. 
If the answer is \\u201cno,\\u201d the reasoning path is either extended further or discarded entirely.\\nThis approach differs from traditional rule-based systems as it does not rely on a predefined set of rules or formalized knowledge bases. Instead, it leverages the LLM\\u2019s implicit knowledge and reasoning capabilities to dynamically evaluate each step in the context of the query.\"}",
"{\"summary\": \"The paper proposes a retrieval augmented reasoning method called FiDeLiS for knowledge graph question answering. The method uses Path-RAG to retrieve relevant entities and relations from KG, and conducts a deductive-reasoning-based beam search to generate multiple reasoning paths leading to final answers. The experiments are conducted on three benchmark KGQA datasets, including WebQuestionSP (WebQSP) , Complex WebQuestions (CWQ) and CR-LT-KGQA, and proves its effectiveness. In addition, the paper also conducts extensive experiments for deep analysis and discussion.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Two components of the proposed method, Path-RAG and Deductive-verification Beam Search, are proven to be effective for KGQA.\\n\\nExtensive experiments are conducted on three benchmarks, including ablation study, analysis experiments and case study.\", \"weaknesses\": \"The novelty of the paper is still limited. Although the paper proposes two useful components including Path-RAG and DVBS for KGQA, and also demonstrates their effectiveness, however, the main method still follows the paradigm of ToG (Think on Graph).\\n\\nIn the experiments, (1) the important hyper-parameters such as beam width and depth are different when comparing the proposed method and ToG, which will make the comparison unfair. The paper sets the default beam width as 4 and depth as 4, but ToG sets them as 3 in their paper. However, the paper doesn\\u2019t mention it. According to Figure 2, ToG would obtain higher performance when setting beam width and depth as 4, although it may be still worse than the proposed method. (2) According to the ablation study in Table 2, replacing Path-RAG with ToG would result in substantial performance declines, the performance would be comparable to or even worse than that of ToG. 
Does that mean the improvement of the method mainly relies on Path-RAG?\\n\\nSome parts of the paper should be made clearer. For example, after Retrieval in Path-RAG, we can obtain entities $E_m$ and relations $R_m$, and then iteratively construct reasoning step candidates to extend the reasoning paths based on them. In addition, DVBS is designed to prompt LLMs to iteratively execute beam search on the reasoning step candidates. Thus, are the reasoning step candidates all from $E_m$ and $R_m$? Does beam search only execute on these candidates? \\n\\nSome typos and mistakes.\\n\\nLine 203, two based.\\n\\nIn Figure 2 (d), wrong figure.\\n\\nIn Section 3.4, there are no results for ToG in Table 6, so how is the conclusion obtained that the proposed method shows superior efficiency compared to ToG?\", \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thans for your response!\", \"comment\": \"Thank you to the author for addressing most of my concerns. However, I remain quite concerned about the incremental nature of the method's improvements. Especially after seeing your response that \\\"Paradigm shift from retrieving knowledge facts to reasoning paths from KG,\\\" in my view this approach represents an incremental improvement on existing work. Considering that many previous works have already explored using reasoning paths from KG to enhance RAG and KBQA tasks (dating back to the BERT era) [1,2,3], I believe the technical contribution of this paper may not meet ICLR's bar. Therefore, I will maintain my current score for now, though I remain optimistic and am eager to discuss this work's contributions with other reviewers!\\n\\n---\\n\\n[1] Lan Y, He S, Liu K, et al. Path-based knowledge reasoning with textual semantic information for medical knowledge graph completion[J]. BMC medical informatics and decision making, 2021, 21: 1-12.\\n\\n[2] Li Z, Jin X, Guan S, et al. Path reasoning over knowledge graph: A multi-agent and reinforcement learning based method[C]//2018 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2018: 929-936.\\n\\n[3] Zhu M, Weng Y, He S, et al. Towards Graph-hop Retrieval and Reasoning in Complex Question Answering over Textual Database[J]. arXiv preprint arXiv:2305.14211, 2023.\"}",
"{\"comment\": \"### **Q1: Does Path-RAG maintain the same retrieval strategy when obtaining the next possible reasoning step candidates $S_t$?**\\nYes, the retrieval strategy remains consistent when obtaining the next possible reasoning step candidates $S_t$.\", \"a_follow_up_question\": \"*\\\"Does it incorporate previously formed reasoning steps to aid in retrieval?\\\"*\\n\\nBy design, we do not explicitly use previously formed reasoning steps during the retrieval process. Instead, this information is utilized in the subsequent LLM reasoning step for enhanced decision-making. The rationale for this design is to keep the retrieval process **lightweight** and **focused** solely on identifying the most relevant next-hop candidates, rather than introducing additional complexity by conditioning on earlier steps. By separating the retrieval and reasoning processes, we aim to balance efficiency and accuracy while allowing the LLM to dynamically integrate information from prior reasoning steps.\\n\\n---\\n### **Q2: How sensitive is the method to the quality of the knowledge graph?**\\n\\nTo address the question, we add a new experimental setting where we deliberately manipulated the KG\\u2019s quality. Specifically, we perturbed the relations within the KGs to simulate real-world scenarios where some edges may be mislabeled, missing, or incorrectly connected to unrelated nodes. 
We consider four perturbation heuristics\\u2014relation swapping, replacement, rewiring, and deletion\\u2014to represent the main manifestations of KG inaccuracies, as follows (find the results in **Appendix H**):\\n\\n* **Relation swapping** simulates misclassified or mislabeled relationships.\\n* **Replacement** introduces spurious links to emulate noise.\\n* **Rewiring** reflects structural distortions in graph connectivity.\\n* **Deletion** models missing edges or incomplete knowledge.\\n\\nOur findings, presented in the revised paper (refer to Appendix H), indicate that the performance of our method remains robust to a reasonable level of perturbation. This robustness is primarily due to our method\\u2019s reliance on both semantic similarity and structural information during retrieval, which helps mitigate the effects of incorrect or incomplete edges. Additionally, the LLM\\u2019s reasoning capabilities provide further resilience by dynamically compensating for some inaccuracies in the retrieved reasoning paths.\\n\\n---\\n### **Q3: How does the method handle questions that require commonsense reasoning during the inference process?**\\n\\nIn this paper, we did not explicitly design mechanisms to handle commonsense question answering, as our primary focus is on the KGQA task, where the model answers questions based on knowledge graph (KG) information. However, our proposed method can be adapted to handle commonsense reasoning in the following ways:\\n\\n(1) **Utilizing Commonsense Knowledge Graphs**: There are existing commonsense knowledge graphs (e.g., ConceptNet) that could serve as valuable resources for providing commonsense knowledge. 
By integrating such graphs into our framework, the proposed method can construct plausible commonsense reasoning paths to support the question-answering process.\\n\\n(2) **Leveraging LLMs for Final Reasoning**: Our method relies on LLMs for the final reasoning step, and these models are inherently equipped with fundamental commonsense reasoning capabilities and pre-trained on vast amounts of general knowledge. This enables the LLMs to handle aspects of commonsense reasoning that are not explicitly covered by the knowledge graph.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their thoughtful comments. Regarding the writing issues mentioned in **W1**, we will polish our paper in the revised version. For the other concerns, we provide detailed responses below:\\n\\n---\\n### **W2: The experiments comparing Fine-tuned LLMs with GPT-4 seems unfair.**\\n\\nWe acknowledge the concern due to GPT-4 being a proprietary model with limited accessibility for fine-tuning or deeper customization. Our decision for this comparison is intended to observe trends whether the proposed **training-free framework** can achieve similar or even better performance compared to methods requiring further training. Our findings demonstrate that the proposed method, as a training-free framework, outperforms fine-tuned models while requiring significantly less time and computational resources for training. \\n\\n---\\n### **W3: The vanilla retriever compared in Section 3.3 is a relatively simple retriever.**\\n\\nWe appreciate the reviewer's concern. To further validate this, we add another baseline from KAPING[1], and report the coverage ratio comparison on CWQ as follows:\\n\\n| Method | Depth=1 | Depth=2 | Depth>3 |\\n|---|:---:|:---:|:---:|\\n| Vanilla Retriever | 59.34 | 52.17 | 47.31 |\\n| KAPING (top-k triplet retrieval) | 65.72 | 60.41 | 53.11 |\\n| Path-RAG | 72.61 | 69.38 | 62.78 |\\n\\nThese results demonstrate that Path-RAG consistently achieves higher coverage, especially for deeper reasoning paths. Unlike the triplet-based retrieval in KAPING, Path-RAG leverages graph structure to capture not only highly relevant nodes and edges but also intermediate \\u201cbridge\\u201d connecting other highly relevant nodes and edges in next hop neighbors. 
This design mitigates over-reliance on semantic similarity alone by ensuring that structural relationships are also considered, reducing the likelihood of errors in reasoning path construction.\\n\\n* [1] Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering: https://arxiv.org/pdf/2306.04136\\n\\n---\\n### **Q1: Does the accuracy of keyword extraction significantly impact the effectiveness of the retrieval approach?**\\n\\nYes, the keyword extraction impacts the effectiveness of the retrieval approach. We add another ablation study to consider using keywords and without using keywords as follows: (we use coverage ratio to measure the effectiveness of different retrievers)\\n\\n| Method | Depth=1 | Depth=2 | Depth>3 |\\n|---|:---:|:---:|:---:|\\n| Path-RAG w/ keywords | 72.61 | 69.38 | 62.78 |\\n| Path-RAG w/o keywords | 68.78 (**$\\\\downarrow$ 3.83**) | 65.27 (**$\\\\downarrow$ 4.11**) | 57.13 (**$\\\\downarrow$ 5.65**) |\\n\\nIt shows that generating an exhaustive list of keywords related to the query can maximize the coverage of potential reasoning steps required to answer it. These keywords expand the search space for the beam search process by including additional, related information that may not be explicitly stated in the query. During the retrieval stage, this broader keyword list allows the system to identify more potential candidates, which enriches the input for the beam search. As a result, the expanded search space increases the chances of discovering relevant reasoning paths and improves the model\\u2019s ability to find accurate and effective solutions.\"}",
"{\"comment\": \"We sincerely thank all the reviewers for their helpful comments and suggestions, which have been instrumental in improving our paper. Below is a summary of the major concerns raised and how we addressed them:\\n\\n---\\n**(1) Novelty of contributions**: some reviewers raised the concerns that our work is incremental (i.e., ToG) and the novelty is limited. We would like to highlight that while both works build upon the conceptual foundation of reasoning over knowledge graphs (KGs), our work make several original contributions:\\n\\n- **Enhanced retrieval mechanism (path-rag)**: ToG uses iterative beam search to explore paths in a KG, but its reliance on basic pruning strategies limits the recall and diversity of reasoning paths. FiDeLiS introduces a novel retrieval module, Path-RAG, which leverages an LLM to generate an exhaustive set of query-relevant keywords based on the input query. These keywords are then used to retrieve candidate paths from the KG. This approach increases the likelihood of including potentially relevant paths in the reasoning process (ensuring higher coverage of potential paths) and significantly reduces the chances of missing critical reasoning paths, as demonstrated by our comparative experiments (Tables 1 and 2).\\n\\n- **Deductive-Verification Beam Search**: Unlike ToG, which relies on standard LLM predictions for pruning, FiDeLiS incorporates **deductive reasoning verification** to validate each reasoning step. This ensures that reasoning paths are logically sound and grounded in the KG, addressing ToG\\u2019s vulnerability to misleading paths caused by noisy LLM predictions.\\n\\n- **Efficiency Without Training**: Like ToG, FiDeLiS is a training-free framework. However, it is computationally more efficient due to the streamlined Path-RAG and DVBS components, which minimize redundant searches while maintaining performance. 
This makes FiDeLiS more practical for resource-constrained scenarios, as demonstrated by efficiency analysis in Table 6.\\n\\n**Positioning our work**:\\nWhile ToG introduced a significant framework for reasoning over KGs, its scope is constrained by reliance on static beam search and limited error-checking mechanisms. In contrast, our framework starts from **identifying** two fundamental questions when reasoning over KGs: (1) how to retrieve specific knowledge from KG to allow precise reasoning?; (2) how to make the reasoning model understand and utilize the retrieved structured knowledge. To this end, we propose two key modules, Path-RAG and DVBS, which emphasize logical correctness and faithfulness while controlling the efficiency. These modules directly tackle a core limitation of ToG, where reasoning paths often lead to plausible yet unverified steps.\\n\\nWe would like to also mention that incremental advancements are often necessary to solve practical challenges and refine methods. While our work has some overlap with ToG, it addresses crucial gaps in **retrieval recall**, **logical validation**, and **efficiency** issues. These improvements represent a substantial step forward in making KG-enhanced reasoning both more accurate and scalable.\\n\\n---\\n**(2) Comparison with Baselines**: some reviewers questioned the fairness of comparisons with ToG and noted the absence of recent baselines and more backbone models:\\n\\n- **Fair Hyperparameter Settings**: We ensured that both ToG and FiDeLiS used identical beam width and depth settings (set to 4) in all experiments to maintain fairness. This clarification has been explicitly added to the revised manuscript.\\n\\n- **Integration of Recent Baselines**: While KG-CoT and ToG-2.0 were published contemporaneously with our work, we acknowledge their importance and plan to include comparisons in future work. 
For GNN-RAG, we clarified that its training-based framework is fundamentally different from our training-free approach and less directly comparable.\\n\\n- **Additional Backbone Models**: To demonstrate robustness, we extend evaluations to include newer LLMs, such as GPT-4o, Qwen-2-7B, and LLaMA-3.1-8B. The results (e.g., FiDeLiS + GPT-4o achieving 86.34% Hits@1 on WebQSP) underscore the generalizability of FiDeLiS across different architectures.\\n\\n---\\n**(3) Role and effectiveness of Path-RAG**: we add new ablations to isolate Path-RAG\\u2019s impact and compare it with semantic-only methods. We observed that Path-RAG outperformed KAPING across varying complexities of questions. While impactful, Path-RAG alone does not account for all performance gains, as deductive reasoning provides complementary benefits as well.\\n\\n---\\n**(4) Robustness to KG quality and commonsense reasoning**: we add new experiments to simulate real-world KG inaccuracies through perturbations (e.g., relation swapping, deletion). We observe that FiDeLiS remains robust and achieves only minor performance drops under a reasonable perturbation level.\"}",
"{\"comment\": \"Thanks for the follow-up question. We would like to clarify that the setting w/o keywords refers to when we only use the original query instead of the keywords to calculate the similarity score as shown in Eq (2), and for the subsequent processing, we keep them the same to make the comparison fair.\\n\\n**w/ keywords** (where $K$ refers to the keywords):\\n\\n$E_m = \\\\operatorname{arg\\\\,top_m}_{i \\\\in E} \\\\cos(z(K), z(e))$\\n\\n$R_m = \\\\operatorname{arg\\\\,topm}_{i \\\\in R} \\\\cos(z(K), z(r))$\\n\\n**w/o keywords** (where $q$ refers to the user query):\\n\\n$E_m = \\\\operatorname{arg\\\\,top_m}_{i \\\\in E} \\\\cos(z(q), z(e))$\\n\\n$R_m = \\\\operatorname{arg\\\\,topm}_{i \\\\in R} \\\\cos(z(q), z(r))$\\n\\nIn addition, we would like to highlight that this improvement on the retrieval stage should not be considered minimal considering the large scale of the knowledge graph, where even small gains reflect improvements in the system's ability to retrieve relevant reasoning steps amidst vast amounts of candidates. Additionally, the benefit of adjusting keywords is more notable in queries that require deeper reasoning, with improvements of 4.11% at depth 2 and 5.65% at depths greater than 3. It's also important to note that keyword utilization is only one aspect of our approach. Our method outperforms standard baseline retrievers like KAPING, achieving even more substantial performance enhancements (as shown in the above response).\\n\\nWe are eager to address any additional concerns you might have. Please let us know if there are specific issues or expectations we should meet to improve our rating. We are committed to responding effectively to your feedback. Thanks.\"}"
]
} |
ETFfXGM3e4 | SAT-LDM: Provably Generalizable Image Watermarking for Latent Diffusion Models with Self-Augmented Training | [
"Lu Zhang",
"Liang Zeng"
] | The proliferation of AI-generated images necessitates effective watermarking to protect intellectual property and identify fake content. While existing training-based watermarking methods show promise, they often struggle with generalization across diverse image styles and tend to produce noticeable artifacts. To this end, we introduce a provably generalizable image watermarking method for Latent Diffusion Models with Self-Augmented Training (SAT-LDM), which aligns the training and testing phases by a free generation distribution to bolster the watermarking module’s generalization capabilities. We theoretically consolidate our method by proving that the free generation distribution contributes to its tight generalization bound without the need to collect new data. Extensive experimental results demonstrate that SAT-LDM achieves robust watermarking while significantly improving the quality of watermarked images across diverse styles. Furthermore, we conduct experimental analyses to demonstrate the strong generalization abilities of SAT-LDM. We hope our method offers a practical and convenient solution for securing high-fidelity AI-generated content. | [
"image generation",
"watermarking",
"latent diffusion model"
] | Reject | https://openreview.net/pdf?id=ETFfXGM3e4 | https://openreview.net/forum?id=ETFfXGM3e4 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"y4RPV03ubp",
"mr1KCRkcHi",
"mKLHP6vsOz",
"mDfho9MdxB",
"kUJKqLmivv",
"j38LQP4lPK",
"iYMUnoDYMR",
"gj9XVcbQ2d",
"fJIFihoJSB",
"cCIBFIeWch",
"b9IYroqJZq",
"aXEH808Mxi",
"W0PujyQ12J",
"RcE1S6Oug5",
"PmERT5kwdo",
"PfA6LD6rSH",
"M2oi27TO65",
"LShJzc1e7R",
"JjgzbJ1wf9",
"HgBgGx6tRu",
"FSfAhguRo8",
"FQTyzfwEWU",
"8h8ME5bRdb",
"2iYJFbFBsM",
"2A84AegD5Y",
"0f30yJ6RBN"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732701558704,
1730707283842,
1732598940350,
1733204460146,
1734759955015,
1733150935772,
1733126282139,
1729685406482,
1732599665064,
1733204078975,
1732599175428,
1732598997956,
1737523582536,
1732597062092,
1732896769003,
1733153230544,
1732599362308,
1732598748849,
1732725908854,
1730233298941,
1732599284837,
1732597730071,
1732597397535,
1729640658861,
1733128997167,
1732678682667
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_U5Eu"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_UfTs"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Area_Chair_FMGE"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_U5Eu"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_iTqJ"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_UfTs"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_iTqJ"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_gLwe"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_gLwe"
],
[
"ICLR.cc/2025/Conference/Submission3553/Reviewer_gLwe"
]
],
"structured_content_str": [
"{\"title\": \"Official comment to authors\", \"comment\": \"I appreciate the time and effort the authors put in into answering my questions. I would like to keep my score.\"}",
"{\"summary\": \"This paper studies how to watermark diffusion models across diverse image styles. The authors propose a training-based watermark method SAT-LDM. In particular, the authors plug a message processor into the VAE decoder to obtain watermarked images from latents. During the training, SAT-LDM jointly trains the message processor and message extractor. The diffusion model is fixed during the training. No external data is required for the training. Theoretical analysis is provided to demonstrate the generalization ability of the proposed method. Experiments show that SAT-LDM can generalize across different image styles.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The writing is clear and easy to follow.\", \"The method does not require additional training data.\", \"A theoretical guarantee is provided for the proposed method\", \"Experiments show that the method produces effective watermarks while maintaining high image fidelity.\"], \"weaknesses\": \"- More visualization results, especially of different image styles, can help demonstrate the generalization of the proposed SAT-LDM.\\n\\nThe authors addressed my concerns.\\nI agree with reviewer iTqJ that the novelty of the proposed method might be relatively limited.\\nTherefore I will maintain my score.\", \"questions\": \"Please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To Reviewer U5Eu \\u2014 Part I\", \"comment\": \"Thank you for your time and thoughtful dedication to our paper! We have addressed your concerns below and revised the paper to incorporate the reviewers' suggestions. Please let us know if you have further questions.\\n\\n---\\n\\n> **Q1:** The assumption on the Lipschitz continuity of the loss function is somewhat strong: \\n1) How can one check if it is true? \\n2) How can one estimate the Lipschitz constant $K$, which may be large and lead to an unsatisfactory upper bound? \\n\\n**A1:** \\n1) **Verifying Lipschitz Continuity:** \\nMany common loss functions, including Mean Squared Error (MSE) and Cross-Entropy Loss, are Lipschitz continuous when predictions $\\\\hat{y}$ are constrained to bounded domains. For example, the MSE loss $L(y, \\\\hat{y}) = (\\\\hat{y} - y)^2$ is Lipschitz continuous when $\\\\hat{y}$ is restricted to a closed interval $[a, b]$, as the gradient $\\\\nabla L = 2(\\\\hat{y} - y)$ is bounded within this range. Similarly, the Cross-Entropy Loss is Lipschitz continuous when prediction probabilities $\\\\hat{y}$ are confined to $(0, 1)$, ensuring bounded gradients.\\nAdditionally, activation functions commonly employed in neural networks, such as ReLU and Sigmoid, are proven to be Lipschitz continuous in prior works [1].\\n\\n2) **Estimating the Lipschitz Constant $K$:** \\nEstimating the Lipschitz constant of neural networks is an active area of research. For instance, Fazlyab et al. proposed a convex optimization-based approach to efficiently estimate the upper bound of a neural network's Lipschitz constant [2]. Additionally, Latorre et al. employed polynomial optimization techniques to compute tight upper bounds for $K$ [3]. 
These methods provide both theoretical support and practical tools for estimating $K$.\\n---\\n\\n> **Q2:** Provide a detailed explanation on the assumption of Lipschitz continuity of the loss function.\\n\\n**A2:** For most commonly used loss functions (e.g., MSE loss, Cross-Entropy Loss) [1,4,5], their variations in the parameter space are generally smooth and continuous. Within the bounded data range (e.g., image data or binary vectors in our case), these loss functions are Lipschitz continuous. Furthermore, the fundamental linear mappings and activation functions (e.g., Sigmoid and ReLU) [6,7] used in neural networks are also Lipschitz continuous. Additionally, weights in neural networks are often regularized to prevent overfitting [8,9], which helps ensure that the constant $K$ does not grow excessively large. Thus, this assumption is not overly restrictive and can be satisfied by carefully selecting the loss functions, model architectures, and training strategies.\\n\\n---\\n\\n> **Q3:** How to verify an assumption?\\n\\n**A3:** Verifying the Lipschitz continuity assumption can be approached both theoretically and empirically: \\n\\n1) **Theoretical Verification:** \\n - **Function Composition:** Analyze the composition of functions within the model. If each individual function is Lipschitz continuous with known constants, the overall Lipschitz constant can be derived based on the composition rules. \\n - **Bounded Inputs and Parameters:** Ensure that the inputs to the loss function and the model parameters are bounded. Boundedness, combined with Lipschitz continuous activation functions, simplifies the verification of the overall Lipschitz continuity.\\n\\n2) **Empirical Verification:** \\n - **Gradient Norms:** Compute the gradient norms of the loss function with respect to inputs over a validation dataset. Consistently bounded gradient norms provide empirical evidence supporting Lipschitz continuity. 
\\n - **Spectral Norm Analysis:** Utilize techniques like spectral normalization to empirically bound the Lipschitz constant $K$. By normalizing the spectral norms of weight matrices, we can effectively control and estimate $K$.\"}",
"{\"title\": \"Further Clarification to Reviewer UfTs\", \"comment\": \"Thank you for your feedback. We appreciate your acknowledgment that our rebuttal effectively addressed your concerns. Notably, **reviewer iTqJ has recognized the theoretical novelty of our proposed method following the rebuttal**. In light of this clarification, we kindly request you to reevaluate your score or opinion.\"}",
"{\"metareview\": \"2x borderline accept, 2x borderline reject. This paper studies how to watermark diffusion models across diverse image styles by introducing a training-based watermark method that avoids external datasets and provides theoretical guarantees on generalization. The reviewers agree on the (1) clear writing and solid theoretical framing, (2) effectiveness of watermark embedding without reliance on external data, (3) promising performance on image fidelity and robustness, and (4) value of the self-augmented training approach. However, they note (1) limited novelty relative to existing diffusion-native watermarking methods, (2) incomplete analysis of false positives and distribution assumptions, (3) potential mismatch between unconditional and conditional generation in real use cases, and (4) insufficient ablations distinguishing the benefits of new training data from structural changes. The authors have followed up with additional experiments, clarifications on distribution gaps, and new results on FPR and guidance scales, yet several reviewers remain partially unconvinced, so the AC leans to not accept this submission.\", \"additional_comments_on_reviewer_discussion\": \"N/A\"}",
"{\"title\": \"Further clarification to reviewer gLwe\", \"comment\": \"Thank you very much for your patience and timely responses throughout the review process. We greatly appreciate your detailed feedback. However, we believe there might still be some misunderstandings, and we would like to clarify them further.\\n\\n---\\n\\n> **Q1:** It may require diverse proof like even the feature for the latent space of the clip in SD to give solid argument.\\n\\n**A1:** Our hypothesis focuses on the denoised latent embeddings. Thus, directly analyzing *\\\"the feature for the latent space of the clip in SD\\\"* might not be as relevant or intuitive, as the complete denoising process involves multiple \\\"clip\\\"-assisted denoising steps. In contrast, analyzing the distribution of the denoised latent embeddings directly is more straightforward, which is what we have conducted. Specifically, we have performed extensive t-SNE visualizations on these denoised embeddings and also supported this assumption indirectly with a Wasserstein metric analysis. These experiments collectively enforce the feasibility of our assumptions in practical scenarios.\\n\\n---\\n\\n> **Q2:** The reviewer's update mostly convince me, but the claim of *\\u201cthe similarity between the composite distribution of all prompts and the free/unconditional distribution\\u201d* is still too strong for me.\\n\\n**A2:** Regarding our hypothesis, ideally, $p\\\\left(\\\\mathbf{z} \\\\mid \\\\mathbf{\\\\epsilon}\\\\right) = \\\\sum p\\\\left(\\\\mathbf{z} \\\\mid \\\\mathbf{\\\\epsilon}, \\\\mathbf{x}^\\\\text{prompt}\\\\right) p(\\\\mathbf{x}^\\\\text{prompt})$, which essentially computes the conditional probability distribution by marginalizing over the prompt variable $\\\\mathbf{x}^\\\\text{prompt}$. This assumption is intuitive and aligns with similar idea presented in prior work, such as [1].\\n\\n---\\n\\nThank you once again for acknowledging the contributions of our work! 
We kindly ask you to reconsider this aspect one last time. If our response has addressed your concerns and you find it reasonable, we would be sincerely grateful if you could reevaluate your score. \\n\\nRegardless of your decision, we deeply appreciate your valuable feedback and guidance, which have greatly enhanced the quality of our work. Reviewers like you make this submission process an enriching and rewarding experience for us.\\n\\n---\\n\\n**References:** \\n[1] Lu, Y., et al. \\\"Prompt Distribution Learning.\\\" CVPR 2022.\"}",
"{\"title\": \"Looking Forward to Further Feedback\", \"comment\": \"Dear Reviewer iTqJ,\\n\\nThank you for taking the time to review our submission and provide your thoughtful feedback. We hope our rebuttal has adequately addressed your concerns. Specifically, we have clarified the misunderstandings, highlighted our main contributions, and provided additional ablation experiments. As the discussion period approaches its end, we kindly request that you review these points and consider updating your evaluation accordingly. Moreover, if you find our response satisfactory, we kindly ask you to consider the possibility of improving your rating.\\n\\nThank you very much for your valuable contribution, and we look forward to your response.\\n\\nBest regards, \\nThe authors\"}",
"{\"summary\": \"The paper proposes an image watermarking scheme that generalizes across different image styles. The authors prove the generalization bound for the watermarked image generator and the message extractor and compare their method against several state-of-the-art approaches in terms of image quality and robustness to removal attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Authors provide a theoretical guarantee on the generalization error of their approach that, to my knowledge, has not been done in previous works. Experimental evaluation demonstrates that the proposed method yields watermarked images of better quality than those of the competing works. The robustness to removal attacks is on the level of state-of-the-art methods.\", \"weaknesses\": \"The assumption on the Lipschitz continuity of the loss function is somewhat strong: 1) how can one check whether it holds, and 2) how can one estimate the Lipschitz constant K (which can be quite large, leading to an unsatisfactorily large upper bound)?\", \"questions\": \"I am willing to increase my score if authors provide a detailed explanation on the assumption of Lipschitz continuity of the loss function. How can one verify this assumption? Does it hold in practice?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"General Response\", \"comment\": \"We want to express our gratitude for the valuable comments and constructive feedback of all reviewers. We are encouraged that the advantages of our paper were generally appreciated by the reviewers. The reviewers thought *\\u201cThe writing is clear and easy to follow. A theoretical guarantee is provided for the proposed method. Experiments show that the method produces effective watermarks while maintaining high image fidelity.\\u201d* (Reviewer UfTs), *\\u201cThe paper focuses on a significant topic to address the growing need for copyright protection of AI-generated content.\\u201d* (Reviewer iTqJ), *\\u201cAuthors provide a theoretical guarantee on the generalization error of their approach that, to my knowledge, has not been done in the previous works. The robustness to removal attacks is on the level of state-of-the-art methods.\\u201d* (Reviewer U5Eu), *\\u201cThe observation that there is a mismatch between the training and testing phases is crucial and can greatly inform future research. The theoretical section is informative.\\u201d* (Reviewer gLwe).\\n\\nWe have carefully gone through each review and summarized several important questions that were commonly raised; we address them in this thread. We have also uploaded a rebuttal version of the paper with the following revisions (all new text and content is in blue; note that this is not the final version):\\n\\n1. **Additional Visual Comparisons**: We have included diverse visualization examples across different image styles in Appendix Section F. \\n2. **New Experimental Analyses**: \\n - Extended experiments on watermark detection and user identification with larger datasets (e.g., 5,000 images). 
\\n - Results on different pretrained models.\\n - Experimental analysis using LAION-400M as the test distribution.\\n - Additional experiments conducted with a guidance scale of 1.\\n - Evaluation of training performance when using LAION-400M captions as prompts.\\n3. **Discussion on Lipschitz Continuity of the Loss Function**: We have added a discussion of the Lipschitz continuity assumption on the loss function in Appendix Section G, covering its rationality and verifiability in practice.\"}",
"{\"title\": \"Further Clarification to Reviewer iTqJ\", \"comment\": \"Thank you for your detailed feedback. However, we believe there might still be some misunderstandings, and we would like to provide further clarification.\\n> **Q1:** actually 'free' (what you proposed) is not really a watermarking method, but a training strategy to improve the generalization ability of the watermarking methods, which is orthogonal to proposing a new watermarking method, and that's why your implementation is highly depended on FSW in this paper.\\n\\n**A1:** Thank you for your thoughtful analysis of our method. Regarding the positioning of the \\\"free\\\" strategy, our primary goal was to **rethink how watermarking modules in LDMs are trained** and propose a training approach that enhances training-based watermarking methods. As you pointed out, this strategy extends beyond watermarking tasks; the same design principle can be applied to other tasks with similar architectures that demand improved generalization capability. We greatly appreciate your insights into the broader potential of our method.\\n\\nOur decision to adopt the FSW-like structure was driven by the need to control experimental variables, ensuring fairness and clarity in demonstrating the core advantages of the \\\"free\\\" strategy. This choice also reflects both scientific rigor and practical resource constraints. For example, training our method from scratch on the FSW-like structure using a single RTX 4090 takes less than half a day on average, whereas Stable Signature requires 8 GPUs for around a full day [1], not including its further fine-tuning for different users. According to the Stable Signature paper, their experiments demanded approximately **2,000 GPU-days, or \\u2248 50,000 GPU-hours**. 
Additionally, Stable Signature requires separate fine-tuning for different users, whereas a more flexible watermarking design like FSW represents a forward-looking trend in this field.\\n\\n---\\n\\n> **Q2:** Besides, your motivation now comes from 'better generalizing to different image styles when applied to real-world scenarios'. It is kind of tricky to prove such generalization capability. Yes, you proved that the watermarked images have good quality compared to the original image in terms of PSNR and SSIM, those pixel-to-pixel metrics. But there is still a gap between \\\"a good generalization capability\\\" and \\\"a good image quality\\\".\\n\\n**A2:** First, we would like to clarify that we have never claimed, nor attempted to demonstrate, that \\\"good generalization ability\\\" and \\\"good image quality\\\" are equivalent. Rather, our definition of \\\"a good generalization capability\\\" explicitly includes **image watermark robustness**. A robust watermarking method should embody both attributes; therefore, we believe it is essential to address both aspects, rather than overlooking one.\\n\\nSecond, it appears your concern pertains to the gap between theory and practice in our work. However, we argue that this gap does not signify a disconnect, but rather reflects the inherent challenges of aligning local experimental metrics with the complexities of real-world data. In our theoretical analysis, we employed the **Wasserstein distance** to demonstrate that the \\\"free\\\" generation distribution mitigates the discrepancy between training and testing distributions, providing theoretical support for improved generalization. \\n\\nIn our experiments, we used **watermarked image quality metrics** and **watermark robustness** to assess generalization across different prompts and image styles as **proxy indicators** of **generalization ability**. 
While these metrics may not fully capture all facets of generalization, they are interpretable, practical, and relevant to real-world applications. Furthermore, we designed experiments across diverse data distributions (e.g., COCO, LAION-400M, Diffusion Prompts, and AI-generated prompts) to reflect real-world generalization performance. This combination of theoretical analysis and empirical evaluation strengthens our argument for improved generalization.\\n\\n---\\n\\nThank you once again for acknowledging the contributions of our work! As the rebuttal process is coming to a close, we would appreciate it if you could let us know whether you have any further concerns and/or consider raising the score.\\n\\n\\n**References:** \\n[1] Fernandez, P., et al. \\\"The stable signature: Rooting watermarks in latent diffusion models.\\\", in CVPR 2023.\"}",
"{\"title\": \"To Reviewer gLwe\", \"comment\": \"Thank you for your time and dedication to our paper! We have addressed your concerns below and revised the paper to incorporate the reviewers' suggestions. Please let us know if you have further questions.\\n\\n---\\n\\n> **Q1:** The authors claim that under 1,000 AI-generated prompts, the proposed method surpasses the baseline. However, this number of prompts is far too small for general case measurements, especially considering potential bias in the language models used. \\n\\n**A1:** Due to the high computational cost, we chose 1,000 prompts for comparison, consistent with prior works such as *FSW*[1], *Tree-Ring*[2], and *Gaussian Shading*[3]. Additionally, we have conducted experiments on two new tasks\\u2014watermark detection and user identification (Section E.1)\\u2014with generated image sizes of 1,000 and 5,000, respectively. These experiments further demonstrate the effectiveness of our method.\\n\\n---\\n\\n> **Q2:** Additionally, this comparison seems unfair. The authors should provide a comparison of results using selected prompts from LAION-400M (but not used during training). The reported advantage may solely come from the prompt distribution shift between GPT-generated prompts and those from LAION-400M. Since the proposed method forgoes all prompt information during training (\\u201cusing empty prompt\\u201d), it avoids this shift and may appear better (due to bias from GPT-generated prompts). \\n\\n**A2:** Thank you for pointing this out. We would like to emphasize that Table 1 already includes comparisons across diverse datasets: COCO, LAION-400M, Diffusion Prompts, and AI-Generated Prompts. Importantly, the prompts used during testing are independent of those in training, ensuring both robustness and fairness in the evaluation.\\n\\n---\\n\\n> **Q3:** Fig. 
3 can be interpreted as $ d(z_{\\\\text{laion-prompt}}, z_{\\\\text{gpt-prompt}}) > d(z_{\\\\text{no-prompt}}, z_{\\\\text{gpt-prompt}}) $, which does not provide any evidence that $ d(z_{\\\\text{real-prompt}}, z_{\\\\text{no-prompt}}) $ is small in most cases. \\n\\n**A3:** To clarify, the interpretation should be $ d(z_{\\\\text{laion-image}}, z_{\\\\text{gpt-prompt}}) > d(z_{\\\\text{no-prompt}}, z_{\\\\text{gpt-prompt}}) $, as the training distributions consist of images from LAION and free distributions. \\nRegarding the specific concern that $ d(z_{\\\\text{real-prompt}}, z_{\\\\text{no-prompt}})$ is small in most cases, we provide additional analysis in Section E.3 (see A13) where we replace GPT-generated prompts with real-world LAION prompts, yielding similar trends and strengthening the robustness of our claims. This approach, while not exhaustive, offers a practical approximation to the diversity of real-world settings and further corroborates the effectiveness of our proposed method.\\n\\n---\\n\\n> **Q4:** In fact, it is widely believed that there is a large distance between the conditional and unconditional distributions in DMs, which is why we use (either classifier or classifier-free) guidance. \\n\\n**A4:** We would like to clarify that our theoretical assumption focuses on the similarity between the composite distribution of all prompts and the free/unconditional distribution, rather than on the specific conditional distribution to the unconditional distribution.\"}",
"{\"title\": \"To Reviewer U5Eu \\u2014 Part II\", \"comment\": \"> **Q4:** Does it hold in practice?\\n\\n**A4:** By incorporating regularization techniques such as spectral normalization [10] and weight decay, we can empirically maintain the Lipschitz constant $ K $ within a manageable range, making this assumption both practical and reasonable in real-world scenarios.\\n\\n---\\n\\n**References:** \\n[1] Anil, C., Lucas, J., & Grosse, R. (2019). Sorting out Lipschitz function approximation. *arXiv preprint arXiv:1903.03252*. \\n[2] Fazlyab, M., Morari, M., & Pappas, G. J. (2019). Efficient and accurate estimation of Lipschitz constants for deep neural networks. *NeurIPS 2019*. \\n[3] Latorre, F., Lodi, A., & Martello, S. (2020). Lipschitz constant estimation for neural networks via sparse polynomial optimization. *ICLR 2020*. \\n[4] Bousquet, O., & Elisseeff, A. (2002). Stability and generalization. *Journal of Machine Learning Research*, 2, 499\\u2013526. \\n[5] Zhang, C., et al. (2021). Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 64(3):107\\u2013115. \\n[6] Bartlett, P. L., Foster, D. J., & Telgarsky, M. J. (2017). Spectrally-normalized margin bounds for neural networks. *NeurIPS 2017*. \\n[7] Ledoux, M. (2001). The concentration of measure phenomenon. *Mathematical Surveys and Monographs, Vol. 89*. \\n[8] Bengio, Y., et al. (2017). Deep learning. *MIT Press*. \\n[9] Srivastava, N., et al. (2014). Dropout: A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56), 1929\\u20131958. \\n[10] Miyato, T., et al. (2018). Spectral normalization for generative adversarial networks. *arXiv preprint arXiv:1802.05957*.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"To Reviewer UfTs\", \"comment\": \"Thank you for your time and dedication to our paper! We have addressed your concerns below and revised the paper to incorporate the reviewers' suggestions. Please let us know if you have further questions.\\n\\n---\\n> **Q1:** More visualization results, especially of different image styles, can help demonstrate the generalization of the proposed SAT-LDM.\\n\\n**A1:** Thank you for recognizing the significance of our work. In response to your feedback, we have added additional visualization examples in Section F of the revised manuscript. These new results encompass various image styles, as discussed in Section C. By comparing SAT-LDM with different watermarking methods, we further highlight its generalization capability across diverse image styles. We hope these enhancements address your request, and we sincerely appreciate your efforts in helping us improve the quality of our paper.\"}",
"{\"title\": \"To Reviewer gLwe\", \"comment\": \"We sincerely thank you for your valuable suggestions and for raising the score! We have addressed your new concerns below and have revised the paper accordingly based on your feedback.\\n\\n---\\n\\n> **Q1:** It would maybe clearer to put aggregated result in Fig 1 and detailed result in experiment part.\\n\\n**A1:** We appreciate the reviewer\\u2019s suggestion. Moving Table 1 to the experimental section would indeed enhance the clarity and accessibility of the results. However, implementing this change would require restructuring the figures and related content, which entails a considerable amount of work. Due to the time constraints imposed by the PDF modification policy, we plan to incorporate this adjustment in the camera-ready version.\\n\\n---\\n\\n> **Q2:** About Q3 and Q4, with the additional experiments, I become more confused about the experiment setting. Is the distribution measured by generated images with guidance scale 1 or a larger value such as 7.5 (which is the most common setting for SD generation)?\\n\\n**A2:** To align with practical use cases, we use a guidance scale of 7.5, unless stated otherwise. We appreciate you pointing out this oversight in our paper. In response, we have clarified this in Section 5.3 by adding the statement: *\\\"Unless stated otherwise, the experimental setup follows the description provided in Section 5.1.\\\"*\\n\\n---\\n\\n> **Q3:** Would guidance scale lead to different distribution visualization results? Indeed, with strong visualization results, it could be accepted that using unconditional generation to replace conditional one. Still, this seems to not be fully achievable. \\nIn fact, lots of research has shown that using classifier guidance, the image quality is even largely influenced, as well as its match with the training prompt considering different guidance scales. 
(And generation is totally different from unconditional one, where the latter usually shows as blurring or content-less generation.) So it is counter-intuitive that one can directly match free and conditional distributions.\\n\\n**A3:** Following the setup described in Section 5.3, we vary only the guidance scale ($gs$) value and visualize the corresponding distributions, as shown in Figure 6 in Appendix E.6. \\nAs illustrated in Figure 6, when $gs \\\\leq 10$, there is almost no noticeable difference between the \\u201cFree\\u201d and \\u201cTest\\u201d distributions. However, at $gs=14$ and $gs=18$, distinct differences emerge. This could be attributed to the excessively high guidance scales ($gs=14$ and $gs=18$), which amplify the guidance signal and lead to certain degrees of distributional deviation. Moreover, as seen in Table 2, when $gs$ changes from 10 to 18, there is a slight degradation in both the FID score and average watermark robustness. Watermarking methods typically aim to achieve Pareto optimality between watermark image quality and robustness. In our case, the observed distributional deviation may shift this Pareto frontier, affecting one or both objectives. Nevertheless, even under extreme conditions like $gs$ = 18, the results remain exceptionally robust. \\nOur method may capture deeper shared features between conditional and free/unconditional distributions. For instance, both are derived from denoising Gaussian noise. While the denoising processes differ, certain shared features might be preserved. These features, while not interpretable by humans, could still hold meaningful information for the model. As a result, watermarking modules trained on unconditional distributions generalize effectively to conditional distributions.\\n\\n---\\n\\n> **Q4:** About the training prompt distribution, as the prompt used for prompting GPT-4 is manually designed, there (should) exist some ablation study about the prompt also. 
Though I believe this may partially out of the scope of the paper and would accept current result without such ablation study. But the limitation needs to be stated (to inform further study).\\n\\n**A4:** This is indeed an important point that warrants further clarification. Thank you for your suggestion! We have added relevant remarks regarding this limitation in the conclusion section, highlighting it as a direction for future research.\\n\\n---\\n\\nOnce again, we deeply appreciate your patience and highly constructive suggestions! If you have any further questions or concerns, we would be grateful if you could let us know. Moreover, if you find our response satisfactory, we kindly ask you to consider the possibility of improving your rating. Thank you very much for your valuable contribution.\"}",
"{\"title\": \"Thanks for the additional explanation.\", \"comment\": \"Thanks for the explanation and additional experiments. I will increase my score to 5, but still, I don't think this paper is ready to be accepted at this point.\\n\\nI do acknowledge that this paper gives a good theory foundation for their method, self-augmented training, and this aspect should be appreciated. But if we consider this contribution before reading this paper, I would expect this paper will show that 'free' is better than 'external' on different kinds of existing watermarking methods, because actually 'free' (what you proposed) is not really a watermarking method, but a training strategy to improve the generalization ability of the watermarking methods, which is orthogonal to proposing a new watermarking method, and that's why your implementation is highly depended on FSW in this paper. Then the paper structure will be like explaining 'free' is better than 'external' in theory, then proving it by comparing 'free' and 'external' under different existing watermarking methods.\\n\\nBesides, your motivation now comes from 'better generalizing to different image styles when applied to real-world scenarios'. It is kind of tricky to prove such generalization capability. Yes, you proved that the watermarked images have good quality compared to the original image in terms of PSNR and SSIM, those pixel-to-pixel metrics. But there is still a gap between \\\"a good generalization capability\\\" and \\\"a good image quality\\\". I would suggest changing the paper motivation a little bit. For instance, your simple yet effective method can improve the image quality of any existing robust watermarking methods, to achieve the frontier of the tradeoff between the image quality and image watermark robustness.\"}",
"{\"title\": \"To Reviewer gLwe \\u2014 Part III\", \"comment\": \"> **Q11:** Using the unconditional generated distribution seems less meaningful. It might be better to use conditional generated distributions with diverse prompts, such as those from LAION-5B.\\n\\n**A11:** Table 9 compares the results of three different training distributions: \\n1) 30K images from LAION-400M (LAION Image), \\n2) 30K images generated using the captions from 1) as prompts (LAION Prompt), \\n3) 30K images from the free generation distribution (Free). \\n\\n| Training distributions | PSNR \\u2191 | SSIM \\u2191 | FID \\u2193 | None \\u2191 | 1 \\u2191 | 2 \\u2191 | 3 \\u2191 | 4 \\u2191 | 5 \\u2191 | 6 \\u2191 | 7 \\u2191 | Adv. \\u2191 |\\n|-------------------------|--------|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| LAION Image | 37.46 | 0.987 | 3.50 | 1.000 | 0.974 | 0.999 | 1.000 | 1.000 | 1.000 | 0.929 | 0.975 | 0.982 |\\n| LAION Prompt | 38.82 | 0.992 | 3.32 | 1.000 | 0.986 | 0.998 | 1.000 | 1.000 | 1.000 | 0.938 | 0.984 | 0.986 |\\n| Free | 40.58 | 0.983 | 2.40 | 1.000 | 0.981 | 0.995 | 0.998 | 0.998 | 0.994 | 0.980 | 0.968 | 0.988 |\\n\\nCompared to \\\"LAION Image\\\", \\\"LAION Prompt\\\" shows slight improvements in PSNR and FID but still falls short of \\\"Free\\\". This could be due to the inherent bias in prompts, similarly to that in images. Increasing the number of images or prompts might help mitigate this bias, but such an approach would require substantial computational resources and time, and could raise concerns regarding data privacy and copyright. 
Although using the unconditionally generated distribution may seem less meaningful, it offers a simpler and more general way to approximate the model's generative capabilities, even when the dataset used by the generative model is unknown.\\n\\n---\\n\\n> **Q12:** Can the authors provide results using LAION-400M prompts during testing to ensure the performance improvement is more general?\\n\\n**A12:** See A2 for explanation.\\n\\n---\\n\\n> **Q13:** Can the authors offer further insights regarding Fig. 3, such as whether there exists a good subset of LAION-400M prompts that generate images similar to those from GPT-generated prompts? From Fig. 3, there appears to be significant overlap between the External and Test regions. If so, this suggests the proposed testing scenarios may be biased, given that the external training set prompts are more diverse. \\n\\n**A13:** Overlap is expected, as the model\\u2019s generative capability is derived from external data. The key takeaway from Fig. 3 is the noticeable divergence between the \\\"External\\\" and \\\"Test\\\" distributions in their non-overlapping regions. These divergent regions in the \\\"External\\\" distribution correspond to samples beyond the model's generative capacity, thereby introducing noise and limiting generalization when used for training. \\n\\nIndeed, as with any experimental design, testing scenarios may inherently contain biases. To address this, we constructed GPT-4-generated prompts spanning diverse styles and complexities for testing. Additionally, we replaced the test data in the \\u201cTraining Distributions\\u201d of Section 5.3 with prompts from LAION-400M (which differs from the training data) and repeated the remaining steps. The results, presented in Table 7 and Figure 5, lead to similar conclusions. 
While our testing scenarios may not be entirely bias-free, these additional experiments and results further reinforce the robustness and practical relevance of the proposed method across diverse distributions.\\n\\n| Training distributions | PSNR \\u2191 | SSIM \\u2191 | FID \\u2193 | None \\u2191 | 1 \\u2191 | 2 \\u2191 | 3 \\u2191 | 4 \\u2191 | 5 \\u2191 | 6 \\u2191 | 7 \\u2191 | Adv. \\u2191 | $W_1$ \\u2193 |\\n|-------------------------|--------|--------|-------|------|------|------|------|------|------|------|------|------|---------|\\n| External | 38.87 | 0.988 | 2.89 | 1.000| 0.973| 0.999| 1.000| 1.000| 1.000| 0.935| 0.967| 0.982| 898.6 |\\n| Free | 41.30 | 0.982 | 2.21 | 0.998| 0.967| 0.989| 0.993| 0.993| 0.989| 0.970| 0.956| 0.980| 669.4 |\\n\\n---\\n\\n> **Q14**: Could the authors provide additional results and discussion on guidance?\\n\\n**A14**: See A5 for explanation.\\n\\n---\\n\\n> **Q15**: Could there be any ablation study, as previously mentioned, to differentiate between the contributions of the structural and training data improvements?\\n\\n**A15**: See A6 for explanation.\\n\\n---\\n\\n**References:** \\n[1] Xiong, C., et al. \\\"Flexible and Secure Watermarking for Latent Diffusion Model.\\\" ACM MM 2023. \\n[2] Wen, Y., et al. \\\"Tree-rings watermarks: Invisible fingerprints for diffusion images.\\\" NeurIPS 2023. \\n[3] Yang, Z., et al. \\\"Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models.\\\" CVPR 2024.\"}",
"{\"title\": \"To Reviewer iTqJ \\u2014 Part III\", \"comment\": \"> **Q7:** The experimental evaluation lacks critical ablations and comparisons. The paper should investigate the impact of using different pre-trained diffusion models and evaluate the method under strong attacks, such as diffusion-based attacks.\\n\\n**A7:** We have included evaluations on Stable Diffusion v2.1 (see Section E.2). Our results demonstrate that models trained with free distributions significantly improve watermarked image quality while maintaining high robustness on both SDv1.5 and SDv2.1 models. \\n\\n| Pretrained Models | Training Distributions | PSNR \\u2191 | SSIM \\u2191 | FID \\u2193 | Bit Accuracy \\u2191 (None) | Bit Accuracy \\u2191 (Adv.) | AUC/T\\\\@0.001%F \\u2191 (None) | AUC/T\\\\@0.001%F \\u2191 (Adv.) | Trace 10\\u2074/Trace 10\\u2075/Trace 10\\u2076 \\u2191 (None) | Trace 10\\u2074/Trace 10\\u2075/Trace 10\\u2076 \\u2191 (Adv.) |\\n|-------------------|-------------------------|--------|--------|-------|-----------------------|-----------------------|-------------------------|-------------------------|-----------------------------------------|-----------------------------------------|\\n| SD v1.5 | External | 37.46 | 0.987 | 3.50 | 1.000 | 0.982 | 1.000/1.000 | 0.999/0.995 | 1.000/1.000/1.000 | 0.994/0.992/0.990 |\\n| | Free | 40.58 | 0.983 | 2.40 | 1.000 | 0.988 | 1.000/1.000 | 0.999/0.994 | 1.000/1.000/0.999 | 0.994/0.993/0.991 |\\n| SD v2.1 | External | 36.07 | 0.988 | 4.22 | 1.000 | 0.980 | 1.000/1.000 | 0.995/0.989 | 1.000/1.000/1.000 | 0.985/0.983/0.981 |\\n| | Free | 41.76 | 0.995 | 2.65 | 1.000 | 0.971 | 1.000/1.000 | 1.000/0.994 | 1.000/1.000/1.000 | 0.990/0.987/0.982 |\\n\\nWe have conducted comprehensive experiments under diverse settings, including comparisons with other watermarking methods, training distributions, sample sizes, message bit lengths, sampling methods, guidance scales, inference steps, and pretrained models (detailed in Section E.2). 
Additionally, we also addressed watermark detection and user identification (discussed in Section E.1). These experiments encompassed various attack scenarios (Section 5.1), validating the efficacy of our method and supporting our theoretical claims. Regarding the strong adversarial attacks, due to constraints in time and computational resources, we have deferred their exploration to future work.\\n\\n---\\n\\n> **Q8:** What dataset did the authors use to train the message processor and extractor? How does this align with the claim of \\u201cno external data\\u201d? \\n\\n**A8:** See A5 for explanation. The training process uses samples from the LDM\\u2019s free generation distribution, without relying on any external datasets.\\n\\n---\\n\\n> **Q9:** Given the potential for the message extractor to output messages from non-watermarked images, why is there no report on FPR? \\n\\n**A9:** See A4 for explanation.\\n\\n---\\n\\n> **Q10:** Does Figure 2 accurately represent the proposed structure? If so, how does a frozen VAE decoder process message embeddings, and what are the implications of only training the message processor and extractor? \\n\\n**A10:** In our case, a frozen VAE decoder means its parameters remain fixed, while the intermediate computations are designed to process the message embeddings. The key aspect of training the information processor and the extractor is that we update only these components, leaving all other model parameters unchanged.\\n\\n---\\n\\n**References:**\\n\\n[1] Fernandez, P., et al. \\\"The stable signature: Rooting watermarks in latent diffusion models.\\\", in CVPR 2023. \\n[2] Xiong, C., et al. \\\"Flexible and Secure Watermarking for Latent Diffusion Model.\\\", in ACM MM 2023. \\n[3] Min, R., et al. \\\"A watermark-conditioned diffusion model for ip protection.\\\", in ECCV 2024. \\n[4] Wen, Y., et al. \\\"Tree-rings watermarks: Invisible fingerprints for diffusion images.\\\", in NeurIPS 2023. \\n[5] An, B., et al. 
\\\"WAVES: Benchmarking the Robustness of Image Watermarks.\\\", in ICML 2024. \\n[6] Tancik, M., et al. \\\"Stegastamp: Invisible hyperlinks in physical photographs.\\\", in CVPR 2020.\"}",
"{\"title\": \"Thanks for response\", \"comment\": \"The authors addressed my concerns. Yet I agree with reviewer iTqJ that the novelty of the proposed method might be relatively limited. Therefore, I will maintain my score.\"}",
"{\"summary\": \"This paper introduces SAT-LDM, a watermarking method integrated within latent diffusion models (LDMs) to generate watermarked images. The authors argue that SAT-LDM improves generalization across image styles without compromising quality, unlike existing methods which reportedly degrade image content.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper focuses on a significant topic\\u2014embedding watermarks in images generated by diffusion models like Stable Diffusion\\u2014to address the growing need for copyright protection of AI-generated content.\", \"weaknesses\": [\"The paper's **motivation lacks clarity and appears misguided.**\", \"The authors claim that current diffusion-native watermarking methods degrade image quality by comparing watermarked images to unwatermarked counterparts generated from the same prompt. This comparison is irrelevant because users of diffusion-native watermarking methods would not see unwatermarked images beforehand. Such comparisons would only apply to post-watermarking methods, where preserving the original image content is necessary.\", \"Additionally, the claim that watermarking performance is compromised across image styles is questionable, as style diversity is largely dictated by the diffusion model's training data rather than the watermarking process itself.\", \"The paper\\u2019s Figure 2 depicts a VAE decoder as frozen while taking message embedding as input. If the authors follow the FSW structure, as stated in Section 4.2, **the figure is incorrect because FSW fuses message embeddings into the UNet-decoder, not the VAE decoder**. If the figure is correct, freezing the entire diffusion model (including the VAE) while training only the message processor and extractor introduces the risk of extracting false-positive messages from non-watermarked images (i.e., images generated without embedding). 
Thus, **the paper should include a false positive rate (FPR) analysis to evaluate this aspect**.\", \"The authors claim \\u201cno external data\\u201d usage in Figure 1, but they must have used training data for the two message components, just like the existing methods.\", \"The approach lacks innovation, as it mainly replicates FSW\\u2019s structure and uses the spatial transformer network from StegaStamp to improve robustness.\", \"The experimental evaluation lacks critical ablations and comparisons. The paper should investigate the impact of using different pre-trained diffusion models and evaluate the method under strong attacks, such as diffusion-based attacks, to gauge robustness under adversarial conditions. WAVES [1] could be a reference in this case.\", \"[1] An, Bang, et al. \\\"WAVES: Benchmarking the Robustness of Image Watermarks.\\\" Forty-first International Conference on Machine Learning.\"], \"questions\": [\"What dataset did the authors use to train the message processor and extractor? How does this align with the claim of \\u201cno external data\\u201d?\", \"Given the potential for the message extractor to output messages from non-watermarked images, why is there no report on FPR?\", \"Does Figure 2 accurately represent the proposed structure? If so, how does a frozen VAE decoder process message embeddings, and what are the implications of only training the message processor and extractor?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To Reviewer gLwe \\u2014 Part II\", \"comment\": \"> **Q5:** Another critical point is the \\\"guidance\\\" aspect. The authors need to show experimental results with guidance scale = 1, i.e., direct conditional generation (without incorporating an unconditional DM). I am concerned that the real mechanism behind the method's success is that the watermarking modules adapt to the negative output of the unconditional generation. Since in all cases with guidance scale > 1, the final output probability is inversely proportional to the unconditional distribution, this might be the actual reason for the results, rather than the claim that the method accurately reflects the generated distribution of diffusion models.\\n\\n**A5:** Table 8 presents experimental results with a guidance scale of 1, corresponding to direct conditional generation. For the \\\"Free\\\" approach, when the guidance scale is decreased from 7.5 to 1\\u2014shifting the test distribution from a mixture of conditional and unconditional distributions to a purely conditional distribution\\u2014the FID increases from 2.4 to 3.75 but remains relatively low. In contrast, for the \\\"External\\\" approach, the FID rises from 3.5 to 6.75. This indicates the following: \\n(1) The \\\"Free\\\" approach does not solely rely on the negative output of the unconditional generation but can adapt to different generation conditions. \\n(2) For the \\\"Free\\\" approach, lowering the guidance scale to 1 effectively reduces the diffusion model's dependence on the free/unconditional distribution, resulting in generated images that may slightly deviate from the training distribution. While we assume that the distribution of latent representations generated by all prompts via conditional sampling (the conditional generation distribution) is equivalent to that without a specific prompt, this assumption holds only under the ideal condition of sufficiently diverse prompts. 
Therefore, the slight increase in FID is acceptable and aligns with our proposed Theorem 1.\\n\\n| Guidance scales | Training distributions | PSNR \\u2191 | SSIM \\u2191 | FID \\u2193 | Bit Accuracy \\u2191 (None) | Bit Accuracy \\u2191 (Adv.) |\\n|------------------|-------------------------|--------|--------|-------|-----------------------|-----------------------|\\n| 1 | External | 36.71 | 0.987 | 6.75 | 1.000 | 0.987 |\\n| | Free | 40.90 | 0.984 | 3.75 | 1.000 | 0.994 |\\n| 7.5 | External | 37.46 | 0.987 | 3.50 | 1.000 | 0.982 |\\n| | Free | 40.58 | 0.983 | 2.40 | 1.000 | 0.988 |\\n\\n---\\n\\n> **Q6:** The paper claims two improvements: training data is shifted from a dataset to generated data, and the model structure is updated in Sec. 4.2. However, it is unclear which of these contributes to the observed improvements without an ablation study. There is a risk that the gains come from the structural update rather than the proposed training data change.\\n\\n**A6:** In fact, the \\\"External\\\" results in Table 1(b) can be viewed as the outcome of structural modifications alone, while the \\\"Free\\\" reflects the additional effect of replacing the distribution. Since the authors of FSW provided only model parameters without the training code, we replicated their results based on their paper and made modifications accordingly. Furthermore, it is important to emphasize that we do not consider structural modifications as our main contribution. Our contribution is more theoretical, and thus we focus primarily on the impact of the training distribution.\\n\\n---\\n\\n> **Q7:** While the intuition\\u2014using generated distribution instead of training data\\u2014is clear, the writing makes this idea more complicated than necessary. 
While formal theory is important, the explanation could be simplified to better emphasize the intuition, with theory introduced later.\\n\\n**A7:** We appreciate this feedback and have revised Section 4 for clarity in the updated manuscript.\\n\\n---\\n\\n> **Q8:** In LaTeX, it would be better to use ``\\\" for double quotes instead of \\\"\\\". \\n\\n**A8:** Thank you for the suggestion; we have made the necessary revisions.\\n\\n---\\n\\n> **Q9:** Given that the results are measured under GPT-generated prompts, it remains unclear how promising the method truly is. \\n\\n**A9:** See A2 for explanation.\\n\\n---\\n\\n\\n> **Q10:** Additionally, the explanation for why the proposed method works based on the DM mechanism is insufficient. \\n\\n**A10:** See A5 for explanation.\"}",
"{\"title\": \"To Reviewer iTqJ \\u2014 Part II\", \"comment\": \"> **Q4:** If the figure is correct, freezing the entire diffusion model (including the VAE) while training only the message processor and extractor introduces the risk of extracting false-positive messages from non-watermarked images (i.e., images generated without embedding). Thus, the paper should include a false positive rate (FPR) analysis to evaluate this aspect.\\n\\n**A4:** Our experiments primarily focus on bit accuracy and image quality metrics. However, as you suggested, we have conducted additional False Positive Rate (FPR) analyses in Section E.1. Following the evaluation protocols of *Tree-Ring*[4], *WAVE*[5], and *WaDiff*[3], we calculate the area under the curve (AUC) of the receiver operating characteristic (ROC) curve and the True Positive Rate at a False Positive Rate of 0.001% (T\\\\@0.001%F) using 1,000 watermarked and 1,000 non-watermarked images. \\n\\n| Training Distributions | None | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Adv. |\\n|-------------------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|\\n| External | 1.000/1.000 | 1.000/0.997 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 0.995/0.960 | 1.000/1.000 | 0.999/0.995 |\\n| Free | 1.000/1.000 | 0.995/0.958 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/0.997 | 1.000/0.999 | 1.000/0.999 | 0.999/0.994 |\\n\\nOur \\\"External\\\" and \\\"Free\\\" approaches demonstrate exceptional performance, achieving average AUC and T\\\\@0.001%F values exceeding 99% even under adversarial conditions (Adv.).\\n\\n---\\n\\n> **Q5**: The authors claim \\u201cno external data\\u201d usage in Figure 1, but they must have used training data for the two message components, just like the existing methods.\\n\\n**A5**: We clarify that \\\"no external data\\\" refers to our self-training approach, which utilizes internally generated free distributions (Section 4). 
In contrast to methods that rely on external datasets (e.g., LAION-400M and COCO), SAT-LDM generates training samples entirely within the LDM\\u2019s operational domain, thereby eliminating reliance on external sources.\\n\\n---\\n\\n> **Q6:** The approach lacks innovation, as it mainly replicates FSW\\u2019s structure and uses the spatial transformer network from StegaStamp to improve robustness. \\n\\n**A6:** It is worth noting that **our contribution lies in the theory**. While we build on the established methods in *FSW*[2] and *StegaStamp*[6], our primary contributions lie in the theoretical framework (generalization bounds) and practical implementation (self-augmented training), which significantly enhance the quality of watermarked images. Reviewer U5Eu also acknowledged this, stating: \\n> \\u201cAuthors provide a theoretical guarantee on the generalization error of their approach that, to my knowledge, **has not been done in previous works**.\\u201d\"}",
"{\"title\": \"To Reviewer iTqJ \\u2014 Part I\", \"comment\": \"Thank you for your time and thoughtful engagement with our paper! We have addressed your concerns below and revised the paper to incorporate the reviewers' suggestions. Please let us know if you have further questions.\\n\\n---\\n\\n> **Q1:** The authors claim that current diffusion-native watermarking methods degrade image quality by comparing watermarked images to unwatermarked counterparts generated from the same prompt. This comparison is irrelevant because users of diffusion-native watermarking methods would not see unwatermarked images beforehand. Such comparisons would only apply to post-watermarking methods, where preserving the original image content is necessary.\\n\\n**A1:** We respectfully disagree with the critique of the comparison methodology. The comparison between watermarked and non-watermarked images is designed as a diagnostic tool to quantitatively assess the impact of watermark embedding on image quality. It is not intended to imply that end-users would directly compare these two types of images. Instead, this approach establishes a fair baseline for evaluating the effects of watermarking methods on image quality. By measuring changes introduced by watermarking against an unaltered baseline, our methodology ensures a rigorous assessment. This practice is consistent with prior works, such as *Stable Signature*[1], *FSW*[2], and *WaDiff*[3], which also evaluate methods against non-watermarked counterparts. 
While we acknowledge that end-users may not encounter non-watermarked images, such comparisons are essential for benchmarking and advancing methods within the research community.\\n\\n---\\n\\n> **Q2:** Additionally, the claim that watermarking performance is compromised across image styles is questionable, as style diversity is largely dictated by the diffusion model's training data rather than the watermarking process itself.\\n\\n**A2:** This concern appears to arise from a misunderstanding of the relationship between watermarking and image style diversity. While image style diversity is inherently derived from the diffusion model\\u2019s training data, the ability of the watermarking module to generalize across image styles is determined by its training strategy and the distribution of its training data. It is important to note that watermarking methods trained on external datasets may exhibit biases when encountering rare or previously unseen styles. \\n\\n---\\n\\n> **Q3:** The paper\\u2019s Figure 2 depicts a VAE decoder as frozen while taking message embedding as input. If the authors follow the FSW structure, as stated in Section 4.2, **the figure is incorrect because FSW fuses message embeddings into the UNet-decoder, not the VAE decoder**. \\n\\n**A3:** Regrettably, this statement is factually incorrect. \\nSection 4.3 of the *FSW* [2] paper explicitly states: \\n> \\u201cIn order to achieve the goal of changing embedding message flexibly without training or fine-tuning again, we fuse the message-matrix $m_w$ in **fine-tuned LDM-decoder $D_w$**.\\u201d \\n\\nFurthermore, Section 3.2 specifies: \\n> \\u201cNote that, **the LDM-decoder used in this work is the variational autoencoder (VAE)**, which is widely used in stable diffusion models.\\u201d \\n\\nAs described in the FSW paper, watermark embedding is performed at the VAE decoder stage, which aligns with our implementation. 
Figure 2 in our paper accurately represents this architecture, where the VAE decoder parameters are kept frozen, and the message embedding is incorporated through a plug-in message processor.\"}",
"{\"summary\": \"The paper observed a mismatch in the target image distribution to be watermarked in diffusion models with previous methods, i.e., external dataset (training phase) vs. generated dataset (testing phase). To address this, the authors propose improving the process by training the watermark using data generated by the diffusion model itself. This enhancement leads to improved performance in the authors' experimental settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The observation that there is a mismatch between the training and testing phases is crucial and can greatly inform future research.\\n2. The theoretical section is informative.\", \"weaknesses\": \"Below are the main concerns:\\n\\n1. The biggest concern stems from the experimental setting. The authors claim that under 1,000 AI-generated prompts, the proposed method surpasses the baseline. However, this number of prompts is far too small for general case measurements, especially considering potential bias in the language models used. Additionally, this comparison seems unfair. The authors should provide a comparison of results using selected prompts from LAION-400M (but not used during training). The reported advantage may solely come from the prompt distribution shift between GPT-generated prompts and those from LAION-400M. Since the proposed method forgoes all prompt information during training (\\u201cusing empty prompt\\u201d), it avoids this shift and may appear better (due to bias from GPT-generated prompts).\\n2. Another crucial drawback lies in the explanation of why the method works. The authors use generated data with an empty prompt and claim that this somehow represents the conditional distribution of diffusion models, showcasing a demo experiment in Sec. 5.3 Training Distributions. However, this demo experiment needs further verification. Fig. 
3 can be interpreted as $d(z_{\\\\text{laion-prompt}}, z_{\\\\text{gpt-prompt}}) > d(z_{\\\\text{no-prompt}}, z_{\\\\text{gpt-prompt}})$, which does not provide any evidence that $d(z_{\\\\text{real-prompt}}, z_{\\\\text{no-prompt}})$ is small in most cases. In fact, it is widely believed that there is a large distance between the conditional and unconditional distributions in DMs, which is why we use (either classifier or classifier-free) guidance.\\n3. Another critical point is the \\\"guidance\\\" aspect. The authors need to show experimental results with guidance scale = 1, i.e., direct conditional generation (without incorporating an unconditional DM). I am concerned that the real mechanism behind the method's success is that the watermarking modules adapt to the negative output of the unconditional generation. Since in all cases with guidance scale > 1, the final output probability is inversely proportional to the unconditional distribution, this might be the actual reason for the results, rather than the claim that the method accurately reflects the generated distribution of diffusion models.\\n4. The paper claims two improvements: training data is shifted from a dataset to generated data, and the model structure is updated in Sec. 4.2. However, it is unclear which of these contributes to the observed improvements without an ablation study. There is a risk that the gains come from the structural update rather than the proposed training data change.\\n\\nHere are some additional minor drawbacks, though they do not significantly impact my overall assessment of the paper:\\n\\n1. While the intuition\\u2014using generated distribution instead of training data\\u2014is clear, the writing makes this idea more complicated than necessary. While formal theory is important, the explanation could be simplified to better emphasize the intuition, with theory introduced later.\\n2. 
In LaTeX, it would be better to use ``\\\" for double quotes instead of \\\"\\\".\\n\\nIn summary, the method might work (under the specific experiment setting) only due to the fact that the prompt distance between the zero-prompt and GPT-generated ones is closer than the distance between LAION-400M prompts and GPT-generated ones. Given that the results are measured under GPT-generated prompts, it remains unclear how promising the method truly is. Additionally, the explanation for why the proposed method works based on the DM mechanism is insufficient.\\n\\nAdmittedly, the idea of using the generated distribution of DMs for watermarking is potentially promising. However, using the unconditional generated distribution seems less meaningful. It might be better to use conditional generated distributions with diverse prompts, such as those from LAION-5B.\", \"questions\": \"The questions are connected to the points above, respectively:\\n\\n1. Can the authors provide results using LAION-400M prompts during testing to ensure the performance improvement is more general?\\n2. Can the authors offer further insights regarding Fig. 3, such as whether there exists a good subset of LAION-400M prompts that generate images similar to those from GPT-generated prompts? From Fig. 3, there appears to be significant overlap between the External and Test regions. If so, this suggests the proposed testing scenarios may be biased, given that the external training set prompts are more diverse.\\n3. Could the authors provide additional results and discussion on guidance?\\n4. Could there be any ablation study, as previously mentioned, to differentiate between the contributions of the structural and training data improvements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Score remains but confidence decreases\", \"comment\": \"The authors' update mostly convinces me, but the claim of\\n\\n> the similarity between the composite distribution of all prompts and the free/unconditional distribution\\n\\nis still too strong for me. It may require more diverse evidence, such as examining even the CLIP latent-space features in SD, to make a solid argument. But I am not sure this should be within the scope of the paper, as the paper has already provided some proof for this, and the current proof may already be enough for a paper about watermarking in diffusion models rather than one purely exploring properties of DMs. So I have decided to reduce my confidence to 2 but keep my score at 5.\"}",
"{\"title\": \"Further review about the paper\", \"comment\": \"Thank you for the new results; I believe some of these indeed make the claim in the paper much more solid. I believe most of the concerns are about the writing rather than the experiments or method.\\n\\n1. I think it is proper to include the main result in the motivation part in Fig. 1. But it would maybe be clearer to put the aggregated result in Fig. 1 and the detailed results in the experiment part. I did miss these details before the rebuttal pointed them out, as I was looking for the result comparison only in the Experiment Section.\\n\\n2. About Q3 and Q4, with the additional experiments, I become more confused about the experiment setting. Is the distribution measured with generated images at guidance scale 1 or at a larger value such as 7.5 (which is the most common setting for SD generation)? It is only briefly mentioned in L463:\\n\\n> we sampled 1K instances from both the external and free generation distributions.\\n\\nWould the guidance scale lead to different distribution visualization results? Indeed, with strong visualization results, it could be accepted that unconditional generation can replace the conditional one. Still, this seems not fully achievable.\\n\\nIn fact, a lot of research has shown that with classifier guidance, image quality is largely influenced, as is its match with the training prompt, depending on the guidance scale [1]. (And conditional generation is totally different from the unconditional one, where the latter usually yields blurry or content-less generations.) So it is counter-intuitive that one can directly match the free and conditional distributions.\\n\\n[1] Ho J, Salimans T. Classifier-free diffusion guidance[J]. arXiv preprint arXiv:2207.12598, 2022.\\n\\nThe result with guidance scale = 1 and the ablation study are promising. These concerns are well addressed.\\n\\n\\nAbout the training prompt distribution: as the prompt used for prompting GPT-4 is manually designed, there should also be some ablation study on the prompt. Though I believe this may be partially out of the scope of the paper, and I would accept the current results without such an ablation study, the limitation needs to be stated (to inform further study).\\n\\n\\nOverall, I think the results are pretty good (after some clarification in the rebuttal). I hope the authors could revise the paper to better highlight the results they want to emphasize. I have increased my score to 5 and would appreciate any further discussion about the data distribution, which I believe is truly interesting.\"}"
]
} |
ESM2ixIp3X | Revisiting and Extending Similarity-based Metrics in Summary Factual Consistency Detection | [
"Yuxuan Ye",
"Edwin Simpson",
"Raul Santos-Rodriguez"
] | Cutting-edge abstractive summarisers generate fluent summaries, but the factuality of the generated text is not guaranteed.
Early summary factuality evaluation metrics are usually based on n-gram overlap and embedding similarity, but are reported to fail to align with human annotations.
Therefore, many techniques for detecting factual inconsistencies build pipelines around natural language inference (NLI) or question-answering (QA) models with additional supervised learning steps.
In this paper, we revisit similarity-based metrics,
showing that this failure stems from the use of reference texts for comparison and the granularity of the comparison.
We propose a new zero-shot factuality evaluation metric,
Sentence-BERT Score (SBERTScore), which compares sentences between the summary and the source document.
It outperforms widely-used word-word metrics including BERTScore and can compete with existing NLI and QA-based factuality metrics on the benchmark without needing any fine-tuning.
Our experiments indicate that each technique has different strengths, with SBERTScore particularly effective at identifying correct summaries.
Additionally, we demonstrate how a combination of techniques is more effective at detecting various types of error. | [
"Factual Consistency",
"Summarisation"
] | Reject | https://openreview.net/pdf?id=ESM2ixIp3X | https://openreview.net/forum?id=ESM2ixIp3X | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"eZ7a5fIH9A",
"cvdik1x6sE",
"a7bmj25VUK",
"FZPLQczFzh",
"F5TtNoXUsw",
"B1iT1egCd5"
],
"note_type": [
"official_review",
"official_review",
"meta_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1730780007012,
1729241803262,
1734578528579,
1737524146125,
1730645450254,
1730642733120
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11792/Reviewer_2JH5"
],
[
"ICLR.cc/2025/Conference/Submission11792/Reviewer_HCzy"
],
[
"ICLR.cc/2025/Conference/Submission11792/Area_Chair_dgVz"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11792/Reviewer_HxTz"
],
[
"ICLR.cc/2025/Conference/Submission11792/Reviewer_HCQS"
]
],
"structured_content_str": [
"{\"summary\": \"This work proposes to use a general-purpose SBERT (SentenceBert) to evaluate factual consistency in summarization, which does not require task-specific training, and is computationally efficient compared to existing approaches. Specifically, SBERT is first utilized to encode the summary and document; different encoding granularities are explored, such as encoding by sentences, by document, or mean pooling. After obtaining the embeddings, cosine similarity is directly computed between the summary and document as the factuality evaluation. Experiments on the AggreFact benchmark (Tang et al., 2023) suggest that the proposed approach outperforms previous NLI-based metrics, though it still lags behind the previous state-of-the-art.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The method proposed in this work is simple and straightforward, leveraging existing SBERT for the encoding; cosine-similarity computation is also lightweight. In particular, the proposed method surpasses the vanilla NLI-based baseline.\", \"Analysis is conducted to examine the best usage of SBERT (e.g. best encoding granularity) along with its advantages over baselines.\"], \"weaknesses\": [\"Though the proposed method is simple and proven effective, the significance is limited due to the performance gap behind QA-based metrics (and it could also be behind other strong NLI-based metrics not included in the experiments). In addition, the performance of the proposed approach is capped by the quality of the pretrained SBERT. If SBERT is not largely improved, the proposed method then has little room for improvement.\", \"Only the vanilla NLI-based metric is adopted as the baseline. There are multiple previous NLI-based metrics that are excluded from the evaluation, which have strong performance on the same AggreFact benchmark.\", \"Yin et al. (ACL 2021). DocNLI: A large-scale dataset for document-level natural language inference.\", \"Utama et al. (NAACL 2022). Falsesum: Generating document-level NLI examples for recognizing factual inconsistency in summarization.\", \"Zha et al. (ACL 2023). AlignScore: Evaluating factual consistency with a unified alignment function.\", \"Zha et al. (NeurIPS 2023). Text alignment is an efficient unified model for massive NLP tasks.\", \"Qiu et al. (ACL 2024). AMRFact: Enhancing summarization factuality evaluation with AMR-driven negative samples generation.\", \"It would be more complete to include LLM-based metrics for performance comparison and analysis, as there are many recent works focusing on utilizing LLMs for factuality detection in summarization.\", \"Liu et al. (EMNLP 2023). G-Eval: NLG evaluation using GPT-4 with better human alignment.\", \"Chen et al. (2023). Evaluating Factual Consistency of Summaries with Large Language Models.\", \"Wu et al. (2023). Less is More for Long Document Summary Evaluation by LLMs.\", \"Xu et al. (EMNLP 2024). Identifying Factual Inconsistencies in Summaries: Grounding Model Inference via Task Taxonomy.\"], \"questions\": \"The reported baseline numbers seem different from those in the original AggreFact paper (Tang et al., 2023). How is the balanced accuracy computed for CNNDM and XSUM?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces and evaluates a new metric, SBERTScore, for assessing the factual consistency of abstractive summaries. SBERTScore is a sentence-level BERTScore. The authors highlight that the reason BERTScore often fails at this task, compared with other metrics, is (1) its reliance on reference texts and (2) its focus on word-level similarity. Thus, SBERTScore uses sentence embeddings and directly compares the generated summary to the source document at the sentence level. SBERTScore outperforms BERTScore and competes with other NLI-based and QA-based metrics on AggreFact. Additionally, the authors found that combining different metrics can improve detection of diverse types of factual errors.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-structured, with clear and objective writing,\", \"The authors thoroughly explore various settings for using sentence-level BERTScore in the context of summary factual consistency detection,\", \"The idea of combining different metrics is promising\"], \"weaknesses\": [\"Using sentence embeddings of paired texts to assess their semantic similarity is not entirely novel. Similar approaches have been employed in various NLP tasks such as machine translation, paraphrasing, and also summarisation. The key difference lies in the context of comparison: when comparing generated text with a reference, it evaluates informativeness, whereas when comparing with the source document, it evaluates faithfulness.\", \"The sizes of the models used in the comparison are inconsistent, potentially affecting the fairness of the evaluation. For instance, SummaC utilizes DeBERTaV3-large (approximately the size of bert-base), QA metrics use T5-large, while BERTScore uses RoBERTa-large.\", \"It is unclear whether the authors used the default settings of the summac package or implemented a custom version. The SummaC package uses an entailment-minus-contradiction score for the ZS variant and the entailment score for the Conv variant. In fact, using only the entailment score for both variants can lead to better performance. I obtained over 70 ROC-AUC on the XSum split of AggreFact using SummaC-ZS.\"], \"questions\": [\"Which package did you use to segment the document into sentences?\", \"Section 5.4 is particularly interesting. NLI models are also sometimes fooled by similar lexical overlap. For example, an NLI model may think the following premise entails the hypothesis:\"], \"premise\": \"\\\"The actor was encouraged by the lawyer.\\\"\", \"hypothesis\": \"\\\"The actor encouraged the lawyer.\\\"\\n\\nDid you observe similar trends with SBERTScore, where lexical overlap causes misjudgments?\\n\\n* For Tables 3 and 4, were the experiments conducted on the validation set or the test set of AggreFact?\\n\\n* For Table 4, the results suggest that segmenting documents and summaries into sentences yields the best performance, while the \\\"mean method\\\" (i.e., averaging the embeddings of sentences) leads to worse performance. Does this conclusion hold for other datasets as well, or is it specific to AggreFact?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper introduces SBERT, a sentence-level variant of BERTScore, to evaluate the factuality of generated summaries. Reviewers agree that the paper is missing some important baselines (e.g. NLI metrics like AlignScore, FalseSum, others like AMRFact, etc.) The experiments are conducted on the Aggrefact dataset which only includes summaries from pre-GPT3 language models. The paper should include additional experiments -- more baselines and datasets containing summaries from recent models -- to demonstrate the strength their approach.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal posted\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"Authors propose SBERTScore, a zero-shot, similarity-based metric using sentence-level embeddings for evaluating factual consistency in summarization. The authors show that token-level similarity-based metrics, such as BERTScore, have inadequate granularity for comparing factuality. Therefore, they propose to compare summary-source sentence embeddings for evaluating consistency. Empirical results demonstrate that SBERTScore outperforms BERTScore and also competes with established metrics like NLI- and QA-based models without additional training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Authors discuss limitations of token-level similarity-based BERTScore in evaluating factuality and propose SBERTScore, which compares sentence-level embeddings and shows superior performance.\\n\\nSBERTScore is efficient as it requires calculating sentence embeddings only once and is faster than NLI- or QA-based metrics.\", \"weaknesses\": \"The method may seem somewhat outdated given the field's shift towards using LLMs as general-purpose evaluators for factual consistency.\\n\\nIt is unclear what aspect of factuality SBERTScore captures better than other metrics. While the results suggest SBERTScore has some strengths, it is ambiguous exactly where and why we should use it.\", \"questions\": [\"SBERTScore, neither on its own nor in combination with other metrics, gives the best performance. For instance, if we look at Figure 1, QAFactEval alone performs better than SBERTScore + any other metric. Also, there has been a recent shift towards LLM judges. What do you see as the practical applications of SBERTScore?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper revisits the efficacy of BERTScore and SBERTScore for evaluating summary factual consistency. Specifically, it demonstrates that if the text used for comparison is changed from reference summaries to source documents, their accuracy will substantially increase. Then, the authors also show that SBERTScore at the sentence-sentence level outperforms metrics of other granularity settings. Moreover, experimental results exhibit that BERTScore and SBERTScore achieve the second-best accuracy, always worse than NLI-based metrics or QA-based metrics. Finally, the paper discovers that using AND to combine the results of two metrics is more likely to obtain a higher accuracy than relying on a single metric.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper discusses a potential misuse of similarity-based evaluation metrics for evaluating summary faithfulness. This is beneficial to the community, as many researchers may not be aware of this.\", \"As pointed out in Section 3.1, similarity-based metrics are much more efficient than other types of metrics. The efficiency analysis is meaningful and rarely seen in prior studies.\"], \"weaknesses\": \"- Although it is great to show that the performance of BERTScore and SBERTScore goes up a lot after changing reference texts to source documents, they still lag behind QA-based metrics or NLI-based metrics. Moreover, some newer and better evaluation methods are not compared in this study, such as AlignScore[1], AMRFact[2], and LLM-based evaluation metrics. Considering the fact that QA-based metrics and NLI-based metrics are already suboptimal, the advantage of similarity-based metrics is only from efficiency.\\n- It seems hard for this paper to balance two goals: emphasizing the advantages of similarity-based metrics and re-evaluating automatic evaluation metrics for summary faithfulness. 
For example, Section 5.4 fully belongs to the former, while Section 5.6, especially Figure 1, almost corresponds to the latter.\\n\\n[1] [AlignScore: Evaluating Factual Consistency with A Unified Alignment Function](https://aclanthology.org/2023.acl-long.634) (Zha et al., ACL 2023)\\n\\n[2] [AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation](https://aclanthology.org/2024.naacl-long.33) (Qiu et al., NAACL 2024)\", \"questions\": [\"There is a missing dot in line 347\", \"As mentioned in the Weaknesses, I would suggest the authors focus on one objective. If the aim is to propose a similarity-based metric, it may be better to further improve the efficacy. Besides, other automatic evaluation metrics (especially the latest ones) for summary faithfulness should be considered.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
]
} |
ES9uz5Qa5W | GPT Shortcuts: Learning Iterative Text Generation Patterns from a Dialogue | [
"Hyungyu Shin",
"Yoonjoo Lee",
"Kwon Ko",
"Yumin Cho",
"Jinho Son",
"Sangwoo Mo",
"Juho Kim"
] | LLM-powered conversational interfaces (e.g., ChatGPT, Claude, and Gemini) support iterative text generation, enabling users to easily generate tailored texts (e.g., texts that should address domain-specific constraints) through a series of follow-up text editing requests. However, generating such tailored texts that address the user-specified constraints across multiple different contexts requires repetitive text generation efforts, which is cumbersome, inefficient, and demanding. To address this challenge, we introduce the concept of *GPT shortcuts*, which is designed to 1) learn iterative text generation patterns from a dialogue and 2) apply these learned patterns to *directly* generate the tailored text. GPT shortcuts generate texts that address necessary constraints while maintaining similar structural appearance to the target text in the dialogue, across different contexts. To assess the capability of language models in generating GPT shortcuts, we present ShortcutBench, a benchmark consisting of 250 crowdsourced iterative text generation dialogues across five text generation tasks. Using ShortcutBench, we conducted an analysis using six LLMs and four prompting methods, varying ways to specify necessary constraints to address in the prompt. We found that 1) larger models generally outperform smaller models, 2) self-explanatory constraints within the target text are effective, and 3) precisely specifying necessary constraints to address is critical for improving the performance. | [
"Large Language Models",
"Shortcuts",
"Iterative text generations",
"Reusable functions",
"Conversational AI"
] | https://openreview.net/pdf?id=ES9uz5Qa5W | https://openreview.net/forum?id=ES9uz5Qa5W | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wkC7luR6Pc",
"heJVjpqRL5",
"f8izvuY34i",
"f7eNfjFMLM",
"WBcy3v8YyB",
"3IyPn3nE5O",
"1nq6APrHWv"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_review",
"comment"
],
"note_created": [
1730656550624,
1731049780452,
1729071706177,
1729361314265,
1732023345142,
1730696296311,
1732023363973
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5574/Reviewer_Bzy5"
],
[
"ICLR.cc/2025/Conference/Submission5574/Reviewer_EJna"
],
[
"ICLR.cc/2025/Conference/Submission5574/Reviewer_UA7n"
],
[
"ICLR.cc/2025/Conference/Submission5574/Reviewer_cTde"
],
[
"ICLR.cc/2025/Conference/Submission5574/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5574/Reviewer_n5Eo"
],
[
"ICLR.cc/2025/Conference/Submission5574/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"The paper describes work on learning iterative patterns of user constraints from interactions with chat interfaces to produce outputs that align with said patterns. The author/s motivate the task by stating that users may impose constraints that are not immediately obvious and can emerge over timesteps of interacting with the chat interfaces. The author/s built a small benchmark named ShortcutBench for this task by crowdsourcing chat interactions on selected tasks including summarization and story generation. The author/s evaluated six LLMs (mostly from the Llama and GPT families) on the benchmark and used two evaluation metrics to measure matches of target text to generated text: SBcon measures constraints addressed via a checklist and a GPT4o judge, and SBapp measures closeness of format between the target text from the user and the text generated by the GPT shortcut. Results showed that providing more explicit constraints allows the models to produce the expected target results. However, my major concern here is that user constraints are context-dependent and made on-the-fly, not outright. Thus, the way the LLMs are being evaluated, with the constraints given outright, may not capture the true complexity of the task. Overall, the paper also needs major clarification, additional experiments, a larger benchmark, and stronger evaluation for the results to be reliable.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The task that the paper proposes, learning the patterns of iterative changes from a series of chat interface interactions, is interesting and indeed has practical and realistic uses. I agree that the on-the-fly emergence of user constraints is one concept that should be modeled from interactions. 
The paper overall is fairly easy to read, making it easy to understand what the author/s are trying to achieve.\", \"weaknesses\": \"I have several concerns with the technicality and evaluation of the paper, which I list as follows:\\n\\n1. Mismatch of expectations in motivation vs actual experiments \\u2192 The paper is motivated by the fact that interactions of users with chat interfaces (ChatGPT, Llama, etc.) span domains such as, for example, writing a clinical report following specific diagnostic rules in health. However, these types of interaction require experts as users of the chat interfaces and not just regular humans. Thus, I find some mismatch between the examples used to motivate the problem vs. what was actually collected and tested in ShortcutBench. In ShortcutBench, the interactions are quite short (averaging 3-5 turns), which might not be the case for domain-specific interactions. It seems to me that the test cases for ShortcutBench are more controlled and boxed into specific tasks (e.g., summarization) than what we would observe in domains such as health (nurses generating clinical reports), education (teachers generating content for classroom reading delivery), etc. Thus, reframing of expectations in both the introduction and motivation is necessary to reduce the mismatch.\\n\\n2. Small benchmark \\u2192 I think 250 test instances for the benchmark is quite small, and more instances are needed for benchmark performance to converge. The author/s can also add splits like what was extracted from WildChat and put it in the benchmark itself, as well as other works that mined chat interactions. Likewise, the paper needs to be thorough with the information covered by the benchmark, particularly the distribution of topics, tasks, user information, and granularity of its test instances.\\n\\n3. 
Presentation of task and setup \\u2192 The text generation methods in Section 5.1 need to be visualized for a better understanding of the differences and characteristics of the constraints presented in the prompts. Moreover, some paragraphs are not at all clearly discussed. For example, \\\"We put the ground-truth constraints in the checklists in the prompt, expecting to yield the ceiling performance of the other methods as it clearly informs correct constraints without any confusion\\u201d: in what form? In what manner? Are these just appended to the prompt? How long is this additional information? In the appendix, only the template is shown.\\n\\n4. Need for better framing of results \\u2192 The results in Tables 2 and 3 look very close to each other, differing by just a hair of points. It might be better to conduct statistical tests to establish which one or two prompting methods are actually informative, given that you already have ground-truth-constraints prompting as the ceiling. Moreover, I would not attribute the ground-truth-constraints results to the LLMs themselves, since the constraints are formed during iterative dialogue with the LLM and are not something that is available outright.\\n\\n5. Need more baselines and specialized models to evaluate \\u2192 Aside from the benchmark being small, the paper could be strengthened by evaluating diverse LLMs, both models optimized for general chat capabilities and task-category specialized models like CoEDIT (https://arxiv.org/abs/2305.09857) for rewriting texts, which may better capture user constraints.\\n\\n6. Lack of human evaluation \\u2192 GPT4o judging the performance of related GPT models may induce some bias, as with any LLM-to-LLM type of judging/evaluation. Given this, I would strongly suggest having human evaluation of the same prompts that the GPT4o judge evaluated, using a Likert-style metric in parallel with SBapp. 
This way, there is a stronger picture of the selected LLMs' performance on the task.\", \"questions\": \"Templates aside, in what form are the different types of user constraints added to the different prompting styles?\\nAlso, please address the questions from the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper studies the problem of making LLMs adhere to domain-specific requirements without respecifying the constraints. In this context, some LLM prompts are studied and analyzed. A dataset/benchmark of such traces is provided, which could be useful to the community.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem is interesting and has practical value\", \"The dataset traces are very interesting\", \"The dataset itself is small but such traces are valuable (can I look at the dataset?)\"], \"weaknesses\": \"**Missing related work**: Given the vast literature/work in the LLM space, it is easy to miss related work, which could now be considered. Some examples:\\n\\n- Memory-augmented LMs [1, 4], where a memory can store user preferences and past feedback to align with these in the future. This class of methods works at inference time and does not update any model parameters, similar to this paper.\\n- Custom generation LMs, e.g., [2] tailors how-to procedures to a user based on a dialogue. Given a goal, an uncustomized procedure P, and a user's customization constraint H, they generate P\\u2032, a customized procedure that accounts for H. These H are quite similar in spirit to the constraints defined in the current paper under review. A benchmark over this dataset is available and might be valuable to compare against in the current work. The techniques (GPT-based, zero-shot, and pipelines) are similar to the ones described in Section 5 of the current paper. The evaluation metric (edit-based MSED) also finds support in this paper.\\n- User assistant LMs, e.g., OpenAI APIs [3] have a \\\"GPT assistant\\\" to create an agent instance that follows particular styles (similar to the motivation in this paper). \\n\\n\\n**Limited technical depth**: e.g., in the \\\"methodology\\\" section there are essentially four prompts, while ideas from much of the related work could be used. 
I feel that the solution offered could be made much richer with a user memory that stores preferences for different tasks; how to structure/represent and utilize this memory of traces is a research question. Secondly, how preference transfer happens between related tasks is a second research question. For example, given a user's preference in task A, which is implicit in the revision trace, can we align to that preference for task A in future inferences, or can that preference be transferred to task A' (~A)? \\n\\n**Results**: The findings are good, but not surprising: larger models can adhere to constraints, and specifying constraints precisely improves performance ([4] had similar findings on understanding user feedback for iterative text generation). I also wanted to understand the kinds of errors that the model makes in a bit more detail, but error analysis is missing. The proposed metrics are interesting, but SB_{app}, which measures structural similarity of output and target text, is a bit complex as a metric and not explained well. I support the intuition of this metric, as other papers [2] have highlighted that refinement of a structure could drift too much from the original structure. I also wonder why an LLM-as-an-evaluator is not tried (or at least as a second evaluator). \\n\\n**Other minor points**: The prompts in Appendix B are a bit abstract; I was looking for one complete example. I tried to find the code and the dataset but could not.\", \"references\": \"1. Memory-assisted prompt editing to improve GPT-3 after deployment ; EMNLP 2022; https://aclanthology.org/2022.emnlp-main.183/ ; (_augments LLM with a memory to store user preferences and past feedback or user preference_) \\n2. Tailoring with Targeted Precision: Edit-Based Agents for Open-Domain Procedure Customization ; ACL 2024; https://aclanthology.org/2024.findings-acl.921.pdf (_custom how-to procedure generation with LLMs_)\\n3. 
GPT \\\"assistants\\\" ; https://help.openai.com/en/articles/8673914-gpts-vs-assistants (_in-context learning to personalize an agent_)\\n4. Self-Refine: Iterative Refinement with Self-Feedback ; NeurIPS 2023 (_LLM self-refinement through fine-grained feedback generated_)\", \"questions\": \"Please go through the weaknesses and clarify if there is something that I am missing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes the concept of GPT Shortcuts, which learn iterative generation patterns through dialogues and directly generate tailored text. Furthermore, the authors introduce a new benchmark, ShortcutBench, created through crowdsourcing and consisting of dialogues across five NLP tasks. They validate GPT Shortcuts on six LLMs using four different prompting techniques, finding that larger LLMs perform better than smaller ones and that specifying constraints is crucial for improving performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Paper is well written and quite easy to follow\", \"Authors propose GPT Shortcuts, which can generate tailored output without requiring multi-turn dialogues\", \"A new benchmark, ShortcutBench, is created, which might be useful for future investigation in this area\"], \"weaknesses\": [\"Authors do not provide information about the crowdsourcing, e.g., #participants and their backgrounds. There is also a lack of statistics, e.g., percentage of coverage/generalizability\", \"I don't fully get your motivation, because I think in most cases, text generation in collaboration with humans is necessary. In this way, humans deliver their thoughts \\\"step by step\\\". The final output is heavily dependent on users' preferences. In this case, how could GPT Shortcuts help?\"], \"questions\": [\"l.139: In Figure 5, gpt-4o, gpt-4o-mini, Llama3.1-70B and Llama3.1-405B have almost the same tendency, while gpt3.5 and Llama3.1-8B have a similar tendency. But gpt3.5 should be even larger than Llama3.1-70B. How could you conclude that smaller LLMs underperform larger LLMs (l.26)?\", \"l.256: You mentioned that you only control the coverage and generalizability of generated checklists. How could you make sure that the checklists are correct and not hallucinated?\", \"l.263: you use the same LLM (gpt-4o) for generation and judgment. I wonder if an LLM can find errors in its own generated text. 
(https://arxiv.org/abs/2310.01798 ; https://aclanthology.org/2024.findings-acl.826/)\", \"l.306: Is $SB_{app}$ really sufficient to measure structural similarity? Or is $SB_{app}$ even necessary at all, as long as LLMs can provide outputs with similar content?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a new task to extract a pattern, a so-called shortcut, from example dialogues that specify the constraints to generate a target text. The LLM needs to derive the shortcut as an abstract pattern to be applied to other constraints in test dialogues to generate a target text. The authors created a benchmark dataset for the task and tested various LLMs and different prompts. They evaluate the LLMs with the prompts in two ways: 1) the coverage of the constraints that need to be represented in the target text, 2) the structural similarity between the generated text and the reference text. The authors discuss the performance in relation to the different models and the different prompts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"presents a new challenge for shortcuts that potentially make prompt writing easier for non-experts\", \"created a benchmark dataset for the task that contains more complex dialogues than comparable datasets\", \"provided results and analysis of a number of LLMs and the different prompts\", \"give insights into which prompts and which LLMs work\"], \"weaknesses\": [\"observations from the results are rather generic, e.g. big models are better than small models, or that llama3.1-1B performs well for story writing and GPT generates too-short stories\", \"the paper can be improved by a deeper discussion on what goes well and in which cases the models+prompts have difficulty. Central is the capability to create a shortcut that is sufficiently abstract and still semantically effective. So what are the generalizations that need to be made, and when do the LLMs fail and when can they do this correctly? 
In the text you give one example of such a generalization: we revised a constraint \\u201cmake DVDs antique\\u201d into \\u201cmake a commonly used item in the past become an antique\\u201d \\u2014> more insight into how many of these cases are in the dataset, and how abstract or how specific the constraint in the target is, would help. Having insight into this, in relation to what goes well and in which cases all LLMs struggle, would make the paper a lot stronger\", \"you use edit distance as a method for the structural match. Why not standard text generation measures such as BLEU, ROUGE, METEOR, or BERTScore?\", \"not clear what sequence of line lengths is and why this is a good measure for structural match\", \"GPT-4o is used to evaluate the coverage of constraints, but how well does this work? You only mention that you take constraints mentioned 7 out of 10 times, but what constraints are these and which ones are rejected?\"], \"questions\": [\"Line 325: \\u201cFinally, SBconv\\u201d \\u2014> Finally, SBapp\", \"explain in more detail what sequence of line lengths is and why this is a good way to evaluate structural similarity\", \"you already give detailed results in the introduction, lines 131-147, while a lot of details still need to be explained. These should come later with the results sections.\", \"Many limitations are actually future work suggestions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for reviewing our paper\", \"comment\": \"We appreciate the reviewers for the constructive feedback. We will reflect the suggestions and submit the paper to a future venue.\"}",
"{\"summary\": \"The paper formalizes the task of learning to generate text that follows certain patterns given examples of those patterns (and optionally natural language descriptions of the patterns). Identifying these patterns can allow a model to adhere to a user's constraints on attributes of the output without the user having to fully specify these attributes for every query. To evaluate the performance of models on this task, the paper presents a new dataset and evaluation framework, and benchmarks a number of large language models on the task.\\n\\nThe task is formalized as learning to produce an output text $y$ given some input text $x$. This amounts to inferring some transformation function $f(x, y)$, and the evidence for inferring this transformation in this context is a given example $(x', y')$. This function is elicited from people through dialogue $d$. So, the task amounts to generating $f_d(x)$ given some dialogue $d$ that describes the transformation of some example input $x'$ to its corresponding output $y'$. \\n\\nThe dataset is collected by sampling 5 different input texts $x'$ for each of 5 text generation tasks, having 10 crowdworkers interact with a system to edit text according to their preferences for each $x'$, resulting in $5 \\\\times 5 \\\\times 10 = 250$ iterative text generation dialogues, each of which specifies a unique $f_d$. For evaluation, additional inputs $x_{test}$ are chosen, and a set of constraints is automatically inferred from the dialogue $d$. An LLM-as-a-judge approach is used to evaluate whether outputs on $x_{test}$ adhere to the automatically extracted constraints. 
An additional edit-distance-based appearance criterion is also used in evaluation.\\n\\nThe paper presents results from a number of different models in the GPT and Llama families, generally finding high constraint satisfaction performance (averages close to or above 90% of the identified constraints being met).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The data collection protocol does not impose constraints, and instead appears to allow crowdworkers to specify constraints that they would like to see in the outputs, which does allow complex and nuanced preferences to be surfaced in interactions between users and a text generation model.\", \"The choice of tasks for data collection is carefully considered and covers a variety of text generation types.\"], \"weaknesses\": [\"The organization and presentation of the paper is in general quite confusing, making many things hard to understand.\", \"The introduction goes into the 4th page, with the list of contributions appearing only at the top of the 4th page, which is quite unusual (the number of pages isn't an issue, details follow)\", \"There is a substantial discussion of the findings in the introduction, at a place where a lot of the concepts and terminology are not adequately introduced.\", \"The idea of \\\"constraints\\\" is an important one throughout the paper, used for evaluation and to determine an \\\"oracle\\\" setting for model evaluation. However, there are no examples of these in the paper, making it difficult to understand the evaluation criteria. Are the annotations in Figure 1a (\\\"Give me a tomato pasta recipe\\\", \\\"I need one portion.\\\", \\\"Also, I have only one pan\\\", etc.) the constraints, or input from which the model infers constraints? 
It appears that the constraints are extracted by GPT-4o, but it isn't clear what these constraints might look like.\", \"It appears that models already do very well at this task (most achieve over 90% constraint satisfaction). This raises the question of the intended role of the dataset. However, there are also other aspects of the benchmark construction that might be contributing to this.\", \"404: \\\"As a possible reason, we observed that the example output is often self-explanatory. For example, LLM responses tend to contain an introductory statement that explains the text to generate, such as \\u201cSure, here are the key points you should remember for your test!\\u201d, which clearly describes the necessary constraints.\\\"\", \"This suggests that the responses produced by the LLM are used in their entirety, including any repetition of the constraints specified in the interaction. In essence, this does the job of inferring the constraints (the primary challenge in this task) for the model (as the authors note in this quote).\", \"Additionally, the constraints used for evaluation are those which are identified to be met by the target text (255: \\\"It is important to note that not all the user-specified constraints in the dialogue have been addressed in the target text.\\\"; 298: \\\"necessary constraints refer to a set of user-specified constraints that **have been addressed** in the target texts\\\") that a user generates in interaction with an LLM (as an aside, it's not very clear what LLM is used to obtain the ground truth answers here either). So, we have a situation where an LLM is given some instructions by a user and follows some (and possibly not all) of these instructions. For evaluation, only the instructions that the LLM does follow are retained. This skews the task in favor of the LLM, since only constraints that have already been proven to be satisfied by an LLM are retained for evaluation. 
This might be a reason for the high performance across the board.\", \"If this understanding is not accurate, it highlights the need for clearer presentation of the method.\", \"408: \\\"However, OneShot significantly underperformed compared to other methods when the target text did not clearly imply the necessary constraints. For instance, the gap of SBcon scores between OneShot and +GTC was 4.5 times larger on average when the dialogue contained more than four constraints compared to fewer (Table 3).\\\"\", \"The text says \\\"for instance\\\", but it isn't clear how instances of the second type (which have >4 constraints) are related to those of the first type (that did not clearly imply the necessary constraints).\", \"Scrubbing out explicit mentions of the constraints, or evaluating the drop in performance for instances where the constraints are not specified, might reveal the task to be far more challenging than the numbers reported in the paper show.\", \"The appearance-based metric doesn't seem to be well-motivated. It mostly measures the degree to which the produced response adheres to the bullet point-like structure shown in the example. However, why this should be weighed equally with constraint satisfaction (which considers the more important question of what should be in the response) is not clear or well-argued.\", \"Additionally, it isn't clear what is used as the target when computing this metric for held-out samples, since it's unclear whether ground-truth responses are collected for those.\"], \"questions\": [\"Why the name \\\"GPT shortcuts\\\"? The idea of reusable sets of constraints/preferences for text generation could go beyond GPT models.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}"
]
} |
|
ERv8ptegFi | GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS | [
"Saman Kazemkhani",
"Aarav Pandya",
"Daphne Cornelisse",
"Brennan Shacklett",
"Eugene Vinitsky"
] | Multi-agent learning algorithms have been successful at generating superhuman planning in various games but have had limited impact on the design of deployed multi-agent planners. A key bottleneck in applying these techniques to multi-agent planning is that they require billions of steps of experience. To enable the study of multi-agent planning at scale, we present GPUDrive, a GPU-accelerated, multi-agent simulator built on top of the Madrona Game Engine capable of generating over a million simulation steps per second. Observation, reward, and dynamics functions are written directly in C++, allowing users to define complex, heterogeneous agent behaviors that are lowered to high-performance CUDA. Despite these low-level optimizations, GPUDrive is fully accessible through Python, offering a seamless and efficient workflow for multi-agent, closed-loop simulation. Using GPUDrive, we train reinforcement learning agents on the Waymo Open Motion Dataset, achieving efficient goal-reaching in minutes and scaling to thousands of scenarios in hours. We open-source the code and pre-trained agents at \url{www.github.com/Emerge-Lab/gpudrive}. | [
"Simulation",
"benchmark",
"multi-agent reinforcement learning",
"autonomous vehicles",
"planning"
] | Accept (Poster) | https://openreview.net/pdf?id=ERv8ptegFi | https://openreview.net/forum?id=ERv8ptegFi | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zF4rul8moP",
"p6Rg8kJNVE",
"p6GQmGmIAS",
"nZRunJ9KI1",
"lgpouiCW9G",
"h42vyF4oOW",
"fnsMDazfyC",
"ZK3vdUJ2iN",
"UUrq9YzXiS",
"U9wFYaF0pm",
"Rcx4ZGNY9m",
"NiXBu5NnXb",
"MiEk5wzPn7",
"Hcv0iNh9dA",
"Gvlsgx9aeL",
"DPzEhdcaLf"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1731633004074,
1737523872391,
1731633175735,
1730722835317,
1730698747049,
1731808772644,
1730582174695,
1734536162694,
1732699765791,
1731633317974,
1731771302663,
1732942287183,
1731632898808,
1732215207038,
1730041683221,
1731633067882
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7887/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7887/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7887/Reviewer_AUkE"
],
[
"ICLR.cc/2025/Conference/Submission7887/Reviewer_Y9u7"
],
[
"ICLR.cc/2025/Conference/Submission7887/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7887/Reviewer_geDV"
],
[
"ICLR.cc/2025/Conference/Submission7887/Area_Chair_PuDM"
],
[
"ICLR.cc/2025/Conference/Submission7887/Reviewer_AUkE"
],
[
"ICLR.cc/2025/Conference/Submission7887/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7887/Reviewer_dLhs"
],
[
"ICLR.cc/2025/Conference/Submission7887/Reviewer_Y9u7"
],
[
"ICLR.cc/2025/Conference/Submission7887/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7887/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7887/Reviewer_dLhs"
],
[
"ICLR.cc/2025/Conference/Submission7887/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Main response\", \"comment\": \"We would like to thank all the reviewers for their time and constructive feedback. We address each comment below and highlight the $\\\\color{green}{\\\\text{feedback that is directly incorporated in green}}$ and items we are $\\\\color{orange}{\\\\text{actively working on in orange}}$.\\n\\nWe hope that the reviewers feel we have addressed their questions and welcome further discussion.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to reviewer geDV\", \"comment\": \"Dear reviewer\\n\\nThank you for your thoughtful comments and for noticing GPUDrive is built on top of a real-world driving dataset (WOMD), which enhances sim realism. Please find our responses below:\\n\\n### Complex behaviors\\n\\n> _While the focus of this paper is on providing a novel simulator, it would be very interesting to see some more complex behavior over longer time-horizons to fully capture the capabilities unlocked by the simulator (e.g., training a single agent policy with higher velocity limit to weave through a simulated traffic scene, etc.)_\\n\\nWe think this is a great suggestion and an interesting direction for future work. As mentioned, the key contribution of the paper is the simulator itself; we look forward to users or future papers showing these behaviors!\\n\\n> _... showcasing such behavior would likely require addressing the \\u201cAbsence of a map\\u201d limitation raised in the paper, in order to formulate a more sophisticated reward function. An important question would then be how easily this could be integrated, and how much the absence of such a feature could hurt adaptation of the simulator._\\n\\nThe reviewer correctly points out that some features are challenging to implement without a map. While the current simulator task of reaching goals without collisions can be achieved without an explicit map, more complex scenarios would benefit from one.\\n\\nThat said, **we do incorporate map-like features, such as lane lines**, which assist with imitation learning and allow for reward functions like staying lane-centered. 
Full map connectivity is available in the dataset, and we are actively working to $\\\\color{orange}{\\\\text{integrate full map connectivity into the simulator}}$.\\n\\n### References\\n\\n> _The discussion of batched simulators could be extended to include references [1-3], where [1] has driven many results in single-agent robot learning, while [3] considers heterogeneous multi-agent settings_\\n\\nThank you for pointing this out! $\\\\color{green}{\\\\text{We integrated these references in the discussion of batched simulators.}}$\\n\\n### Typos\\n\\nThank you for catching these!\\n> _Figure 3 mentions performance on an RTX 4080, while line 711 states RTX 8000_\\n\\n$\\\\color{green}{\\\\text{We fixed that typo, it should be RTX 4080}}$.\\n\\n> _Line 308: \\u201cnumber valid number\\u201d_\\n\\n$\\\\color{green}{\\\\text{Noted and fixed.}}$\\n\\n### Questions\\n\\n> _How easily can the simulator be updated to efficiently provide map-like utilities that allow for lane-keeping rewards (re mentioned limitations)?_\\n\\nSome required features are already present in the data structure; they can be implemented by adding a reward for staying near lane lines. However, for a more robust implementation of this reward, it would likely be necessary to support maps.\\n\\n> _Do you support loading multiple different policies for individual agents? Could they have different sampling rates? How would these aspects affect efficiency?_\\n\\nWe do support loading multiple policies per agent, and they could have different sampling rates as long as they are integer multiples of the simulator time-step. Since agents not taking action at a step would simply be stepped, this would only increase the speed of the simulator step. \\n\\n> _How do traffic jams affect throughput (re BVH)? This could be an interesting experiment to add._\\n\\nThank you for the suggestion! 
We currently do not have an answer, but we agree that studying the impact of traffic jams on throughput (in relation to BVH) would be an interesting avenue for future research.\\n\\n> _In video scene_53.mp4, agent 4 displays rather jerky behavior when moving towards its goal - could you elaborate on the underlying reasons?_\\n\\nGreat observation and thank you for checking out the videos! These agents are trained solely with a goal-reaching reward; they have no incentive _not_ to drive jerkily. If we added jerk penalties (which can easily be done), this would probably disappear. \\n\\n> In video scene_43.mp4, agents 1 and 10 seemingly disappear without reaching their goals - could you elaborate on this behavior?\\n\\nYes, we configured the simulator to remove vehicles from the scene when they collide with a road edge or another agent, although this behavior can be easily adjusted in the config file. The video shows that the pre-trained policy, which achieved 95% performance over 1000 scenes (colliding 5% of the time), still leaves room for improvement. We are actively working on $\\\\color{orange}{\\\\text{improving both the effectiveness and diversity of the simulation agents}}$.\\n\\n### Conclusion\\n\\nThank you for taking the time to provide such a thorough review! We believe we have addressed all of your questions and welcome further discussion. If all your concerns have been resolved, we would be thankful if you could consider increasing your support for our paper.\"}",
"{\"summary\": \"The paper presents a GPU-accelerated simulator that can generate millions of simulation steps per second that can be used to train multi-agent reinforcement learning (RL) algorithms. The simulator is claimed to simulate hundreds to thousands of scenarios/scenes in parallel, with each scene containing thousands of agents.\\n\\nThe simulator is built on top of the Madrona Game Engine and is written in C++. The C++ simulator engine can also be interfaced with learning environments written in JAX and Torch.\\nThe authors have released implementations of RL algorithms capable of processing millions of agent steps per second and some baseline agents trained on these algorithms that achieve 95% of their goals. The simulator claims to provide both recorded logs and RL agents for the environment.\\n\\nThe authors introduced certain metrics to evaluate the simulation speed of GPUDrive in terms of agent steps per second (ASPS), controllable agent steps per second (CASPS), and scene completion time. Compared against other sim engines like Nocturne, GPUDrive achieved a 25-40x training speedup, solving 10 scenarios in less than 15 minutes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed simulator has the flexibility to handle multiple modalities of sensor data.\", \"The authors have implemented ways to reduce the memory footprint due to the large number of agents and observation space using algorithms like Bounding Volume Hierarchy (to exclude certain agent pairs from collision checking) and polyline decimation to approximate straight polylines.\", \"The trained agents are claimed to be useful for out-of-distribution tests for the driving agents.\", \"The authors presented the different simulator features in a comprehensive way.\", \"The paper shows that the simulator gets scaling benefits in terms of increased amortized sample efficiency with increasing dataset size. 
This can be beneficial when dealing with large-scale datasets with limited compute.\"], \"weaknesses\": [\"The paper does not provide simple IDM (intelligent driver model) agents, which can sometimes be practical to have for basic reactivity to the ego-agent.\", \"The authors mention that the current work is limited in properly utilizing the generated samples for optimal training.\", \"Just a thought: The implementation is in C++ and it provides a binding interface with Python environments. It would have been nice to have a mono-language (primarily Python-based) tool, as the model training and other related pipelines are mostly in Python.\"], \"questions\": [\"Were other agents in the scenes, like pedestrians and cyclists, also controlled? If so, what dynamics were used to model their behavior if they were not logged?\", \"Nit: Ethical statement was missing?\", \"Nit: Can the x-axis in the center plot in Fig 5 be made to a log scale?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"GPUDrive introduces a fast multi-agent simulator, built using C++ on top of the Waymo Open Motion Dataset, that helps you run complex scenarios at scale, especially ones related to self-driving cars. This allows iterating on these scenarios more quickly, reaching greater than a million FPS, thus allowing more experimentation runs and iterating on/trying out different scenarios even on desktop-grade GPUs.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. A multi-agent simulator accelerated on the GPU, iterating at over a million steps per second.\\n2. Very well written and structured code to run any experiment easily, with a lot of easy experimentation code readily available. \\n3. Extensive results analyzing the sampling frequency of the simulation.\", \"weaknesses\": \"1. Figure 2 needs a better caption and an explanation.\\n2. Designed to fit one exact dataset. A section explaining the effort required to integrate other datasets is desirable.\", \"questions\": \"1. Benchmarks inherit limitations from the dataset. Can this be addressed by using another dataset?\\n2. Stable Baselines is not well known for speed. Could other implementations of PPO have been used?\\n3. Is there support for multiple GPUs? If so, an ablation or benchmark for that would be great\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Hopefully helpful clarification\", \"comment\": \"Hi,\\n\\nWe appreciate the rebuttal and discussion. We have attempted to clarify some of the points below.\", \"if_we_can_restate_your_objections_to_the_paper_they_appear_to_be_the_following\": \"- **The simulator is written in C++.**\\nWe assume since this was not brought up in the response that we are in agreement that this is not an issue since most simulators are actually written in C++ or other languages and just have python bindings.\\n- **The sim does not currently support domain randomization.**\\nWhile we agree that domain randomization would be an interesting feature to have, we are unclear why it is a critical feature. The simulator already comes with 450000 unique scenes and is not readily solved by extensive tuning of an RL algorithm. Is it possible that there was confusion and the existence of 450000 unique scenes was not clear? If so, we appreciate that being pointed out and will rewrite it.\\n- **Using IPPO instead of MAPPO.**\\nNote that our usage of an RL algorithm is not to make claims about algorithms but purely to point out how quickly scenes can be solved in the benchmark. If it would lead the reviewer to increase their score, we can run MAPPO. We are familiar with the distinction between IPPO and MAPPO and it is unclear to us why MAPPO would make a difference here since the empirical differences between the two are small. For example, in the MAPPO paper, centralized value functions only really helped in 3 and 4 player Hanabi.\\n- **The simulator does not provably contain mixed games.**\\nWe want to caution that there are two possible uses of the word mixed and we are not sure which one the reviewer is referring to. In the first case, mixed agents i.e. multiple policies, the simulator does support this and we have updated the text already to clarify it. 
If the reviewer is referring to the simulator containing general sum games, whether it does or does not is a function of what reward functions the user chooses to use. Under the default reward of goal reaching, the game is general sum and also contains conflicts of interest since this is a standard occurrence in driving (for example, at intersections). \\n- **\\\"The plot shows that as we increase the number of worlds (x), the throughput increases at the same rate (y),\\\" to be unclear. Does this imply that the simulator has no limit on parallelization? If there is a limit (assume you use commercial GPUs), linear scaling would eventually plateau as the number of environments increases.**\\nOur point, which we did not make clearly, was that the simulator experiences sublinear scaling but speed is still improving with more environments. We cannot run more than 1000 or so environments as we exhaust all the GPU memory at that point.\\n\\nHopefully, those points clear up some of the issues?\\n\\n**Finally, we want to re-address the point about user difficulty in programming the simulator.** We claim a mono-language implementation in Python is neither sufficient nor necessary to meet our dual goals of productivity and performance. \\nFrom a language perspective, the GPUDrive architecture can be decomposed as follows:\\n1. A learning layer (Python)\\n2. A bridging layer (C++)\\n3. CUDA kernels.\\n\\nWhy is it hard to do it any other way?\\n1. CUDA kernels must be written in CUDA-C++ because NVIDIA does not support JIT compilation for any language other than C++, and JIT compilation is upstream of maximizing performance on NVIDIA GPUs. For instance, by opting for JIT compilation instead of the more classic Ahead-Of-Time compilation we enable \\u201cRuntime LTO\\u201d. We refer the reviewer to [JIT LTO](https://docs.nvidia.com/cuda/cufft/ltoea/usage/jit_lto.html) for an explanation of the performance benefits of this feature.\\n2. 
Though written in C++, we argue the Bridging layer does not decrease productivity. A simplified view of the PyTorch architecture is that it exports Python bindings to CUDA kernels. Just as with GPUDrive's architecture, the Python bindings are bridged to CUDA via C++. We observe this makes PyTorch no less productive for end-users.\\n\\nWe could of course attempt to rewrite the simulator using an array-based programming language like PyTorch or Jax. However, implementing complex training environments using state-of-the-art simulation methods requires complex data structures and non-trivial control flow (traversing acceleration structures, collision solvers, state machines, conditional logic in functions) is cumbersome in array-based abstractions. For this reason, Jax and Torch-based environments rarely contain all these features.\"}",
"{\"summary\": \"This paper proposes GPUDrive, a GPU-accelerated multi-agent driving simulator designed to increase the efficiency of learning-based systems. The simulator allows for loading expert trajectories from real-world driving datasets, can support multiple observation spaces (including e.g. LiDAR), and displays favorable throughput compared to other openly accessible simulators. The simulator, together with pre-trained goal-conditioned policies, is made openly available with accessible pythonic interfaces.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed simulator improves over alternatives in terms of sample efficiency. All of the design choices appear reasonable, while the underlying source code with pre-trained driving baselines will be released.\", \"The ability to load real-world driving datasets is extremely useful, while providing a variety of observation spaces is a great feature.\", \"Transparency about current limitations of the benchmark is very helpful for user adaptation.\"], \"weaknesses\": [\"While the focus of this paper is on providing a novel simulator, it would be very interesting to see some more complex behavior over longer time-horizons to fully capture the capabilities unlocked by the simulator (e.g. training a single agent policy with a higher velocity limit to weave through a simulated traffic scene, etc.)\", \"Showcasing such behavior would likely require addressing the \\u201cAbsence of a map\\u201d limitation raised in the paper, in order to formulate a more sophisticated reward function. 
An important question would then be how easily this could be integrated, and how much the absence of such a feature could hurt adaptation of the simulator.\", \"The discussion of batched simulators could be extended to include references [1-3], where [1] has driven many results in single-agent robot learning, while [3] considers heterogeneous multi-agent settings\", \"Figure 3 mentions performance on an RTX 4080, while line 711 states RTX 8000\", \"Line 308: \\u201cnumber valid number\\u201d\", \"**References**\", \"[1] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State. \\u201cIsaac gym: High performance gpu-based physics simulation for robot learning.\\u201d NeurIPS, 2021.\", \"[2] J. Panerati, H. Zheng, S. Zhou, J. Xu, A. Prorok, and A. P. Schoellig. \\u201cLearning to fly\\u2014a gym environment with pybullet physics for reinforcement learning of multi-agent quadcopter control.\\u201d IROS, 2021.\", \"[3] M. Lechner, L. Yin, T. Seyde, T.-H. Johnson Wang, W. Xiao, R. Hasani, J. Rountree, and D. Rus. \\u201cGigastep - one billion steps per second multi-agent reinforcement learning.\\u201d NeurIPS, 2024.\"], \"questions\": [\"How easily can the simulator be updated to efficiently provide map-like utilities that allow for lane-keeping rewards (re mentioned limitations)?\", \"Do you support loading multiple different policies for individual agents? Could they have different sampling rates? How would these aspects affect efficiency?\", \"How do traffic jams affect throughput (re BVH)? 
This could be an interesting experiment to add.\", \"In video scene_53.mp4, agent 4 displays rather jerky behavior when moving towards its goal - could you elaborate on the underlying reasons?\", \"In video scene_43.mp4, agents 1 and 10 seemingly disappear without reaching their goals - could you elaborate on this behavior?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": [\"The authors propose GPUDrive, a novel GPU-accelerated, high-fidelity simulator designed for multi-agent reinforcement learning in autonomous driving. This high-performance simulation environment allows for efficient training of reinforcement learning agents in complex, multi-agent scenarios. Furthermore, the authors highlight the simulator's ability to enable agents to navigate thousands of scenarios within hours.\", \"### Strengths\", \"Novelty: GPUDrive introduces a new approach to autonomous driving simulation that leverages the power of GPUs for high performance and scalability.\", \"Practical relevance: The simulator provides a realistic and efficient training environment for autonomous driving, whose efficacy is clear from experiments on real-world data from the Waymo Open Motion Dataset.\", \"Scalability: GPUDrive can handle a large number of agents and complex scenarios, making it suitable for studying intricate interactions in traffic.\", \"Open-source: The availability of the code base and pre-trained agents promotes further research and development in the field.\", \"### Weaknesses\", \"The evaluation of GPUDrive seems limited to the Waymo Open Motion Dataset and the IPPO algorithm. Further evaluation on a wider range of scenarios and different types of algorithms could strengthen the paper's claims. Additionally, improving the diversity of simulation agents seems to be critical.\", \"Empirical comparisons with state-of-the-art simulators other than Nocturne (e.g., Waymax) in terms of simulation speed, realism, and scalability would provide a better context for GPUDrive's performance and capabilities.\", \"The computational cost of running high-fidelity simulations in GPUDrive with a large number of agents could be significant. 
This could limit the accessibility of the simulator for researchers with limited computational resources.\", \"While the authors provide the code base, providing detailed documentation on how to install, configure, and use the simulator, along with clear explanations of the code and its functionalities, would facilitate wider adoption of GPUDrive.\", \"Decisions to accept / reject: This simulator can accelerate the development and testing of autonomous driving algorithms, as it allows researchers to evaluate the performance of their agents in a wide range of situations efficiently. Despite weaknesses pointed above, this work would be a useful addition to the conference. As such, I'm recommending conditional acceptance assuming the authors would address the above weaknesses.\"], \"additional_comments_on_reviewer_discussion\": \"While most reviewers were in favor of acceptance, one reviewer had several concerns, such as lack of domain randomization, presentation issues, and lack of experiments with additional algorithms other than IPPO. The author comments addressed the issue of domain randomization but other issues remain unaddressed. These issues could potentially be addressed in the revision and I highly recommend the authors to do so.\"}",
"{\"comment\": \"Thanks to the authors for addressing the questions.\\nIt would be nice if the pending (orange-colored) tasks were also completed and mentioned in the final version of the paper. \\nI went through the comments and discussions from the other reviewers as well, and it seems that the authors have tried to carefully address their concerns too. \\nI appreciate the authors' efforts in this direction and in releasing their work. This work is a good combination of research and practical engineering. I would like to maintain my score considering the limitations pointed out by some other reviewers. \\nBest wishes! :)\"}",
"{\"title\": \"Response to reviewer dLhs\", \"comment\": \"Thank you for recognizing the _significant speedup_ provided by our simulator and its potential to enable research in complex real-world scenarios. Below are our clarifications:\\n\\n### Sim functionality and mixing agent behaviors\\n\\n> _The paper claims compatibility with existing datasets but only demonstrates map loading, leaving other functionalities unclear. For example, the imitation learning experiment or mixing agent behaviors\\u2014some from datasets and others from RL agents during training._\\n\\nWe highlight that **GPUDrive fully supports mixing agent behaviors**; you can combine replay agents, scripted agents, pre-trained agents, and RL agents in a single rollout without sacrificing parallelism. $\\\\color{green}{\\\\text{We have added this and plans for future work in this regard to Section 5 of the paper.}}$\\n\\n### Domain randomization\\n\\n> _I believe one major advantage of parallel environments is that it allows you to do randomization across different environments (worlds). However, the paper lacks detail on whether GPUDrive supports this capability._\\n\\nDomain randomization is certainly a valuable feature for any simulator. While it is already possible to implement some types of domain randomization, such as randomizing goals or initial positions, we believe the primary challenge of driving all agents to the observed goals remains unsolved for now. We plan to explore domain randomization in future work and appreciate the reviewer\\u2019s suggestion.\\n\\n### Customization\\n\\n> _While GPUDrive offers a Python interface, I am curious how easy it is to customize those key elements in the environment given that the observation, reward, and dynamic functions are written in C++._\\n\\nThank you for pointing this out. We believe GPUDrive is easy to extend despite being in C++ for two key reasons:\\n\\n1. 
We offer **extensive Python bindings that cover most expected observations, reward functions, and dynamics**, allowing users to customize many components without having to touch the C++ code.\\n2. **C++ remains a popular language**, particularly in the robotics and autonomous vehicle communities, and many of our users have the necessary familiarity to extend the framework. Additionally, the framework handles parallelism, so even a basic understanding of C++ is sufficient for implementing new rewards or dynamics.\\n\\n### Algorithms\\n\\n> _Experiments only evaluate IPPO, despite the paper claims that it targets a mixed-motive setting._\\n\\nThe primary goal of the paper is to propose a fast multi-agent simulator, with the RL experiments and code included to help users get started. We note that **IPPO is a valid solver for mixed games and performs well empirically**. In many multi-agent problems, it has been observed that PPO is an effective solver (See e.g., MAPPO [1]).\\n\\nIf the reviewer is asking whether the \\\"independent\\\" aspect of IPPO is compatible with a general-sum game, the answer is yes\\u2014independence refers solely to decentralized training.\\n\\n### Questions\\n\\n> _In Figure 3, the speedup appears nearly linear. However, it would be helpful to examine scaling performance by adding more environments to identify saturation points and gain insights into system limitations._\\n\\nWe apologize for the confusion, please note that **this is a log-log plot**. The plot shows that as we increase the number of worlds (x) the throughput increases at the same rate (y). \\n\\n> _In Figure 5, do you use the CPU-parallel version of Nocturne?_\\n\\nYes! For a fair comparison, we plot both the single-CPU (striped blue) and the parallelized Nocturne using PufferLib (dotted blue, 16 CPUs). Note that the PufferLib version has been carefully designed to outperform naive Python multiprocessing.\\n\\n### Conclusion\\n\\nThank you for your valuable feedback! 
We hope we have addressed the reviewer's comments and are happy to further discuss any of these points. We kindly ask that if we have addressed the reviewer's concerns, they consider increasing their support for our paper.\\n\\n### References\\n\\n**[1]** Yu, C., Velu, A., Vinitsky, E., Gao, J., Wang, Y., Bayen, A., & Wu, Y. (2022). The surprising effectiveness of ppo in cooperative multi-agent games. Advances in Neural Information Processing Systems, 35, 24611-24624.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for your efforts during the discussion phase. After thoroughly reviewing your response, it appears that sim functionality and domain randomization remain part of the development plan and unsupported. For example, there are no experiments to verify the claim of supporting mixed agent behaviors. Domain randomization is also currently missing, which is an important feature in a parallel simulator.\\n\\nAdditionally, there is no analysis provided on cooperative or competitive behaviors, given that the paper stresses the mixed-motive setting in the Intro. From my perspective, IPPO is a rather basic algorithm and differs significantly from MAPPO. But I note this is a minor weakness. \\n\\nFurthermore, I find the statement, \\\"The plot shows that as we increase the number of worlds (x), the throughput increases at the same rate (y),\\\" to be unclear. Does this imply that the simulator has no limit on parallelization? If there is a limit (assuming you use commercial GPUs), linear scaling would eventually plateau as the number of environments increases.\\n\\nI appreciate the authors\\u2019 efforts in building this simulator. However, some claims appear to be overstated and unsupported by experimental results. As an engineering paper, I think it is not ready to be accepted, and I will maintain my score.\"}",
"{\"comment\": \"Thanks for your clarifications and I don't have anything else to add. This paper deserves the 8 I have given :)\"}",
"{\"title\": \"Response to reviewer AUkE\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback! We appreciate your recognition of GPUDrive's scaling benefits and mentioning that we present the wide range of simulator features in a _\\u201ccomprehensible way\\u201d_. Please find our response to your comments below:\\n\\n### Reactive Sim Agents\\n> _The paper does not provide simple IDM (intelligent driving models) agents that can be sometimes practical to have basic reactivity to the ego agent._\\n\\nWe agree that reactivity of simulation agents is crucial. While we do not include simple IDM agents, we **provide pre-trained reactive simulation agents** trained through self-play in the simulator that offer this basic reactivity. The behavior of these agents can be seen in the videos at https://sites.google.com/view/gpudrive/. We are actively working on expanding the $\\\\color{orange}{\\\\text{diversity and effectiveness of available sim agents}}$.\\n\\n### End-to-End Training Throughput\\n> _The authors mention that the current work is limited in properly utilizing the generated samples for optimal training._\\n\\nWe assume the reviewer is referring to the end-to-end training performance in Section 4.2. We acknowledge the concern regarding sample utilization. We have since addressed this issue and $\\\\color{green}{\\\\text{added an improved IPPO implementation, improving the training speed 10X: from 50K to 500K AFPS. Please see the updated Figure 5 in the paper.}}$\\n\\n### Implementation in C++\\n> _Just a thought: The implementation is in C++ and it provides a binding interface with Python environments. It would have been nice to have a mono-language (primarily Python based) tool as the model training and other related pipelines are mostly in Python._\\n\\nWe claim a mono-language implementation in Python is neither sufficient nor necessary to meet our dual goals of productivity and performance. 
From a language perspective, the GPUDrive architecture can be decomposed as follows:\\n1. A learning layer (Python)\\n2. A bridging layer (C++)\\n3. CUDA kernels.\\n\\nWhy is it hard to do it any other way?\\n1. CUDA kernels must be written in CUDA-C++ because NVIDIA does not support JIT compilation for any language other than C++, and JIT compilation is upstream of maximizing performance on NVIDIA GPUs. For instance, by opting for JIT compilation instead of the more classic Ahead-Of-Time compilation we enable \\u201cRuntime LTO\\u201d. We refer the reviewer to [JIT LTO](https://docs.nvidia.com/cuda/cufft/ltoea/usage/jit_lto.html) for an explanation of the performance benefits of this feature.\\n2. Though written in C++, we argue the Bridging layer does not decrease productivity. A simplified view of the PyTorch architecture is that it exports Python bindings to CUDA kernels. Just as with GPUDrive's architecture, the Python bindings are bridged to CUDA via C++. We observe this makes PyTorch no less productive for end-users.\\n\\nWe could of course attempt to rewrite the simulator using an array-based programming language like PyTorch or Jax. However, implementing complex training environments using state-of-the-art simulation methods requires complex data structures and non-trivial control flow (traversing acceleration structures, collision solvers, state machines, conditional logic in functions) is cumbersome in array-based abstractions. For this reason, Jax and Torch-based environments rarely contain all of our features while meeting our performance targets.\\n\\n### Questions\\n- Pedestrian and Cyclist Behavior: **Both pedestrians and cyclists are controlled** within the simulator. We use the same dynamic models as for vehicles, with smaller bounding boxes for pedestrians to reflect realistic behavior.\\n- Ethical Statement: $\\\\color{green}{\\\\text{The ethical statement has been added as Section 7 in the current version}}$.\\n- Log Scale for Fig. 
5: We have added the log plot. $\\color{green}{\\text{Please see the updated Figure 5 in the paper}}$.\\n\\n### Conclusion\\nThank you again for your thoughtful comments! We hope we have addressed all of your questions and welcome any further discussion. If all concerns are resolved, we kindly ask that you consider increasing your support for our paper.\"}",
"{\"title\": \"Checking in\", \"comment\": \"Dear reviewers,\\n\\nAs the discussion period is coming to an end, we kindly ask for your engagement with our rebuttal. We have put significant effort into addressing your concerns and would greatly appreciate any further feedback or discussion.\\n\\nThank you all for the time and thoughtful comments so far!\"}",
"{\"summary\": \"This paper presents GPUDrive, a GPU-based simulator for autonomous driving. The simulator is compatible with existing datasets and allows parallel simulations. The experiments show that it can train a policy with a 25-40x training speedup over the baseline.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. GPU-based simulation is important in facilitating training for complex real-world applications, highlighted by recent advances in robotics.\\n2. The experimental results show significant wall-clock time speedups against baselines.\", \"weaknesses\": \"1. The paper claims compatibility with existing datasets but only demonstrates map loading, leaving other functionalities unclear. For example, the imitation learning experiment or mixing agent behaviors\\u2014some from datasets and others from RL agents during training.\\n2. I believe one major advantage of parallel environments is that they allow you to do randomization across different environments (worlds). However, the paper lacks detail on whether GPUDrive supports this capability.\\n3. While GPUDrive offers a Python interface, I am curious how easy it is to customize the key elements of the environment given that the observation, reward, and dynamics functions are written in C++. \\n4. Experiments only evaluate IPPO, despite the paper's claim that it targets mixed-motive settings.\", \"questions\": \"1. In Figure 3, the speedup appears nearly linear. However, it would be helpful to examine scaling performance by adding more environments to identify saturation points and gain insights into system limitations.\\n2. What is the scaling of the speedup with respect to the number of agents in the environment, e.g., fixing the number of environments and scaling the number of agents?\\n3. 
In Figure 5, do you use the CPU-parallel version of Nocturne?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer Y9u7\", \"comment\": \"Dear Reviewer,\\n\\nFirst of all, thank you for your kind words about the _clarity_ and _structure_ of the paper and code. We are excited about the potential of GPUDrive to **enable RL at scale** on a single GPU and make the algorithmic design process _interactive_. Please find our responses to your comments below:\\n\\n### Figure 2\\n> _Figure 2 needs a better caption and an explanation_\\n\\nWe agree that the caption and explanation could be more detailed. We have $\\\\color{green}{\\\\text{updated the place and caption and added a more comprehensive explanation}}$ in the current version (see Figure 1).\\n\\n### Integrating Multiple Datasets\\n> Designed to fit one exact dataset. A section explaining the effort required to integrate other datasets is desirable.\\n\\nWe appreciate the suggestion to explain the effort required to integrate other datasets. We are actively working on this and have $\\\\color{green}{\\\\text{added a new paragraph in the current version (see Section 5)}}$ to address this and will point to our data processing code after the rebuttal period.\\n\\n### Questions\\n- Benchmarks and Dataset Limitations: Yes, we are actively $\\\\color{orange}{\\\\text{integrating additional datasets}}$, such as the Nuscenes (https://www.nuscenes.org/); this simply requires putting the files into our JSON format. Note that most large, diverse datasets have similar limitations as they are collected from the sensors of a single vehicle. While no dataset is perfect, we believe that working with human data, even with its limitations, provides significant value.\\n- Stable Baselines and PPO Speed: We acknowledge the limitations of the Stable Baselines 3 (SB3) IPPO implementation. 
In response, we have implemented an $\\\\color{green}{\\\\text{improved version of IPPO, achieving an end-to-end training throughput of 500K, a 10X speedup}}$ compared to the previous SB3 version.\\n- Multi-GPU Support: We do not currently have multi-GPU support as this is a property of the training code and not the simulator. It is straightforward to add multi-GPU support by wrapping the model in torch DDP but we do not mention this in the work. We can include it if it feels important to the reviewer.\\n\\n### Conclusion\\n\\nThank you for helping us make this work better! We hope we have addressed your comments properly and welcome further discussion.\"}"
]
} |
ERce2rgMQC | Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements | [
"Jingyu Zhang",
"Ahmed Elgohary",
"Ahmed Magooda",
"Daniel Khashabi",
"Benjamin Van Durme"
] | The current paradigm for safety alignment of large language models (LLMs) follows a _one-size-fits-all_ approach: the model refuses to interact with any content deemed unsafe by the model provider. This approach lacks flexibility in the face of varying social norms across cultures and regions. In addition, users may have diverse safety needs, making a model with _static_ safety standards too restrictive to be useful, as well as too costly to be re-aligned.
We propose _Controllable Safety Alignment_ (CoSA), a framework designed to adapt models to diverse safety requirements without re-training. Instead of aligning a fixed model, we align models to follow _safety configs_—free-form natural language descriptions of the desired safety behaviors—that are provided as part of the system prompt. To adjust model safety behavior, authorized users only need to modify such safety configs at inference time. To enable that, we propose CoSAlign, a data-centric method for aligning LLMs to easily adapt to diverse safety configs. Furthermore, we devise a novel controllability evaluation protocol that considers both helpfulness and configured safety, summarizing them into CoSA-Score, and construct CoSApien, a _human-authored_ benchmark that consists of real-world LLM use cases with diverse safety requirements and corresponding evaluation prompts. We show that CoSAlign leads to substantial gains in controllability over strong baselines including in-context alignment. Our framework encourages better representation and adaptation to pluralistic human values in LLMs, thereby increasing their practicality. | [
"large language models",
"safety alignment",
"pluralistic alignment"
] | Accept (Poster) | https://openreview.net/pdf?id=ERce2rgMQC | https://openreview.net/forum?id=ERce2rgMQC | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xah5kRVyTa",
"vA1O1dSfao",
"scCD2e5EbT",
"rTwR6UdnLF",
"r21dW8xXXz",
"mSJb1fx05Q",
"kSoLzFiF5J",
"j32kAB3TAK",
"eC2y9d91WX",
"bIMMXOaZSj",
"WNZlZIMdYp",
"VgOwNRKHJv",
"TP2U8DbDFy",
"TKK1CSpnfy",
"R31yoHDDo0",
"QBDGHeMiKv",
"P2E84frrED",
"NIpOmyEVIb",
"L8l27CVYAC",
"JewKkfnvd1",
"DlFZuQfczZ",
"AzJscNikgS",
"8yxOYLKSzd",
"8pG3tb8zUW",
"7TPSwSmeU4",
"5P253ukAin",
"1Xzv5v8lq6",
"0lufWdm3Vx",
"0EWoBweCuX"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision"
],
"note_created": [
1732530494898,
1730675847132,
1731861824231,
1732254868422,
1731862296126,
1732908844174,
1731862009889,
1730722185661,
1732377903022,
1733088146939,
1732378344399,
1731963459803,
1731861574422,
1731861458351,
1732690269173,
1732560320602,
1733263338455,
1732555532916,
1730077334764,
1734692538169,
1732561150598,
1731861042071,
1731862088590,
1731862611565,
1731861140815,
1732915404251,
1730698726019,
1731862395173,
1737523826552
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_c6ps"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_c6ps"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_pCoa"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_hHMw"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_6dSX"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_6dSX"
],
[
"ICLR.cc/2025/Conference/Submission7249/Area_Chair_Ty2A"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_hHMw"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_c6ps"
],
[
"ICLR.cc/2025/Conference/Submission7249/Reviewer_pCoa"
],
[
"ICLR.cc/2025/Conference/Submission7249/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"comment\": \"I would like to thank the authors for their detailed responses and especially for showing that their CoSAlign method can be incorporated with the cascade methods. The results on the seen configs of the dataset look promising - as one might hope, it appears to obtain \\\"the best of both worlds\\\". One final follow-up question that I have in order to solidify this point is regarding the unseen split of the dataset that the authors say \\\"follows the same pattern\\\". What are the exact results?\"}",
"{\"summary\": \"The paper tackles the problem of LLM alignment through the use of safety configs that can be modified at inference time, allowing for more controllable and diverse alignment while maintaining helpfulness. In the proposed method, LLMs are finetuned not only by rewarding the response of the model with respect to the prompt, but also with respect to diverse safety configs, in order to generalize to unseen safety configs which have different safety requirements. The authors conduct a thorough evaluation of their method relative to existing ones.\\n\\nThe proposed method helps remove the ambiguity of alignment and helpfulness definitions from previous works by scoring model responses not only with respect to queries but also with respect to safety configs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Topic of interest - specialized safety configs are a topic of interest for practical use of LLMs in organizations that have different safety requirements.\", \"Alignment that properly incorporates both safety and helpfulness is important. 
Specifically the dataset used that couples safety and helpfulness is interesting.\", \"Finetuning a model on conversations conditioned on diverse policies for alignment seems to be a good strategy to overcome the genericness of safety responses.\", \"Most interesting in my opinion is that the concept of adding a safety config as a prefix for a conversation removes ambiguity from the definition of safe and helpful - instead of relying on an existing dataset that teaches a specific version of safe and helpful, you teach a model the concept of safe and helpful with respect to a safety config, which is more powerful and less ambiguous (it is \\\"easy\\\" to judge if a response is safe and helpful with respect to a safety config, it is hard and not well-defined without one).\"], \"weaknesses\": [\"Use of the CoSA-score for evaluation - something a bit concerning to me about the CoSA-score is the way it merges safety and helpfulness into one value. For example, a negative score can rise without safety being altered at all simply by making the model less helpful (if helpfulness drops to zero, the score becomes zero which is better than negative). Similar things can also happen with positive scores and this indeed does seem to happen in some of the reported results, where the score rises mainly by reducing helpfulness (table 3, \\\"seen configs\\\", \\\"in-context alignment\\\", INST+ICA+5shot vs SFT+ICA+5shot). Furthermore, since the helpfulness scores are between {0,1,2,3,4,5}, you may lose a lot of information - e.g. if 100% of responses are safe, you get the same score in the case that they all have helpfulness = 3 and in the case that half are 0 and half are 5, which are somewhat different cases, as in the first the model is essentially helpful on all prompts and in the second only on half. Thus the other numbers reported in the tables are important as they do remove this ambiguity - by reporting helpful+safe and helpful+unsafe you get a complete picture. 
I did not see a discussion of issues such as these that may arise from the CoSA-score; I believe this needs to be discussed.\", \"Higher helpfulness and lower safety (relative to cascade methods) - following the above point, when separating helpful+safe and helpful+unsafe, indeed CoSAlign has a higher helpful+safe score, but also a higher helpful+unsafe score than the cascade methods. I am not sure how to compare CoSAlign's 43% safe+helpful and 8% unsafe+helpful with the SFT+Cascade+Oracle's 29% safe+helpful and 0% unsafe+helpful. While having a lower safe+helpful score, Cascade seems very robust to being unsafe compared to CoSAlign, which is not reflected in the CoSA-score, and I think this needs to be discussed.\", \"Related works - while the main selling point of the method proposed in this work is safety diversity, it also has a lot to do with the topic of keeping the trained model both safe and helpful. I believe this can be discussed, and more related works need to be mentioned in such a discussion (to give a few examples [1,2,3]).\", \"All that being said, I do appreciate the approach and am open to a fruitful discussion.\", \"[1] - https://arxiv.org/abs/2309.07875\", \"[2] - https://arxiv.org/abs/2308.01263\", \"[3] - https://arxiv.org/abs/2401.16332\"], \"questions\": [\"Is there some way to combine the cascade method with the CoSAlign method?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer pCoa\", \"comment\": \"We genuinely appreciate reviewer `pCoa` for their insightful review. We\\u2019re thankful that you recognize our work contributes to the \\u201cpractical importance for effective and safe usage of LLMs in diverse applications,\\u201d and that our paper is well-supported by human evaluations. We address your questions and concerns below:\\n\\n \\n\\n## Response to weaknesses: \\n\\n**On \\u201cIt may not be feasible to specify all aspects of safety, even by domain experts, in natural language\\u201d**: We\\u2019d like to clarify that safety configs do not necessarily need to specify \\u201call aspects of safety\\u201d: just like what we do in our synthetic configs (exemplified in Appendix A.11), the safety config can be used to provide exceptions to regular \\u201cone-size-fits-all\\u201d alignment, indicated by the instruction \\u201cOn the other hand, you must continue to ensure safety in all other categories.\\u201d Therefore, only the difference between the desired safety behavior and a \\u201cone-size-fits-all\\u201d safety-aligned model needs to be specified, greatly reducing the overhead of producing safety configs. \\n\\nIn practice, we show that starting CoSAlign from a one-size-fits-all safety-aligned model can effectively retain safety on input distributions that are not directly specified in the safety config. In this case, the model will \\u201cdefault\\u201d to the one-size-fits-all safety behavior. In our new experiments shown in the **\\u201cResponse to all reviewers\\u201d** section, we find that CoSAlign can significantly increase adversarial robustness against popular jailbreak attacks and reduce over-refusal, while these behaviors are not directly specified in the safety config at all. 
\\n\\n \\n\\n**On the difference between our work and pluralistic alignment in non-safety settings**: Safety is a special setting because while current safety alignment techniques make the model robust to adversarial attacks, they also severely curtail the steerability of models on safety. Instructing models to modify safety standards can be seen as a jailbreak, and is thus not possible with one-size-fits-all safe models. We experimentally show this is indeed the case due to the ineffectiveness of in-context alignment in Section 5.1. On the other hand, for non-safety settings such as cultural alignment, techniques such as Anthropological Prompting [1] have been shown to be effective baselines. However, similar prompting-based techniques do not apply to the safety setting, motivating our CoSA framework and CoSAlign approach. Additionally, we conduct thorough safety-specific evaluation and propose metrics, benchmarks, and data synthesis methods that are tailored to the safety-specific settings. All these contributions are novel and essential for studying pluralistic safety alignment. \\n\\n \\n\\n**On determining disallowed content**: Similar to your first point on \\u201cfeasibility to specify all aspects of safety\\u201d, we\\u2019d like to clarify that since we start with a one-size-fits-all safety-aligned model, the safety guideline only needs to specify the difference from the \\u201cdefault\\u201d safety behavior. This design makes the model robust to prompts not specified in the config, as shown in the results in the \\u201cResponse to all reviewers\\u201d section. \\n\\nIn general, determining what should be disallowed content is a key sociotechnical problem and an area of ongoing research. Approaches such as Constitutional AI [2], Collective Constitutional AI [3], and Simulated Deliberative Democracy [4] have been proposed to import societal values to determine what should be considered allowed/disallowed. 
We propose an alternative approach where this boundary can be flexibly and efficiently adapted at test time by adjusting the specification in the safety config. \\n\\n> I think it may be advantageous to evaluate CoSApien also with the kinds of content that an actual LLM can generate for the different kinds of prompts, for a real proof of its prompts being effective. \\n\\nWe\\u2019d like to clarify that our CoSApien benchmark consists of real-world safety configs curated by professional red-teaming specialists who are adept with real-world safety scenarios. We also collected diverse human-written prompts for each config, which are natural and represent what actual LLMs are often asked about. Please refer to Section 4 for more details. \\n\\n**On details about risk taxonomy construction**: Due to space constraints, we had to move some of the details to the appendix; please see a detailed description in Appendix A.2. To create this risk taxonomy, we first perform prompt clustering and produce cluster descriptions with GPT-4o. Next, we manually edit the descriptions of the largest clusters and produce the final risk taxonomy. We will also make the reference to the appendix clearer in the main text.
"{\"comment\": \"Thanks to the authors for their responses and new experiments. I think my score is suitable and would like to retain it. I would recommend the authors to explicitly mention in the main paper that CoSA starts off with a safety-trained model.\"}",
"{\"title\": \"Response to reviewer c6ps\", \"comment\": \"We deeply appreciate reviewer `c6ps` for their thoughtful feedback. We are thankful for your recognition of the practical relevance of our topic, the advantages of our proposed alignment method, and most interestingly, the framework of obtaining controllability through safety configs. We now address your questions and concerns below.\\n\\n \\n\\n## Response to weaknesses \\n\\n> For example, a negative score can rise without safety being altered at all simply by making the model less helpful (if helpfulness drops to zero, the score becomes zero which is better than negative). Similar things can also happen with positive scores \\n\\nThanks for highlighting this point! We argue that instead of a weakness, this characteristic is actually a benefit of CoSA-Score: We believe that unsafe responses that are *less helpful* should indeed be preferred over unsafe responses that are *more helpful*. For instance, if the prompt requests criminal advice, a highly helpful response would likely cause greater harm to society than one that is less helpful. By reducing helpfulness in unsafe responses, the potential negative impact is mitigated, aligning with our goal of promoting safer model behavior. Importantly, it is only by considering safety and helpfulness together, as CoSA-Score does, that such nuanced trade-offs can be effectively addressed. For positive scores, CoSA-Score's construction allows safe responses that are more helpful to be preferred over safe responses that are less helpful, as one would intuitively expect. We\\u2019d love to provide further clarifications and engage in further discussions on this design if you are interested. \\n\\n \\n\\n> Furthermore, since the helpfulness scores are between {0,1,2,3,4,5}, you may lose a lot of information [...] 
Thus the other numbers reported in the tables are important as they do remove this ambiguity - by reporting helpful+safe and helpful+unsafe you get a complete picture. \\n\\nWe agree with your point that there might be more than one situation where the aggregated CoSA-Score is the same (e.g., all responses are somewhat helpful v.s. half responses are very helpful & the other half is not helpful at all). This is the tradeoff we are making by summarizing safety and helpfulness into a single score. We believe this design is still very meaningful because it allows us to (1) obtain an aggregated metric across all safety configs and prompt distributions to measure overall controllability (2) take into consideration the nuanced trade-offs between helpfulness and safety together (related to the first point above). As you mentioned, we also report the helpful+safe and helpful+unsafe responses as focused metrics on positive-scored and negative-scored responses, making it easier to capture the complete picture. Thanks so much for pointing out these important nuances and we will add corresponding discussions to our draft. \\n\\n \\n\\n> Higher helpfulness and lower safety (relative to cascade methods) \\n\\nWe acknowledge that CoSAlign has a drastically higher rate of helpful+safe responses at the cost of a slightly higher rate of helpful+unsafe responses relative to the Cascade methods. Note that this result is not so surprising, given that Cascade methods specifically focus on improving safety by filtering out responses deemed unsafe by a classifier and replacing them with refusals. Therefore, in this case, the balance between helpfulness and safety is very important because a very safe but not helpful response is useless. As determined by the much higher CoSA-Score, which is specifically designed to capture the trade-off between helpfulness and safety, CoSAlign clearly outperforms the Cascade methods. 
Also note that Cascade-Oracle is a very strong baseline that uses the evaluator model to conduct filtering, since it\\u2019s guaranteed to have 0% unsafe responses. Even in this case, CoSAlign outperforms Cascade-Oracle because Cascade-Oracle only achieves a limited rate of helpful+safe responses. \\n\\nInspired by your suggestions, we have also applied the Cascade methods on top of CoSAlign, and Cascade-Oracle demonstrated additional gains. Please see the **\\u201cResponse to all reviewers\\u201d** section for details. \\n\\n**On related works**: Thank you so much for your suggestion. We agree that these can be a nice addition to our related work discussion. We will add a dedicated paragraph on balancing safety and helpfulness.
"{\"title\": \"Further discussion\", \"comment\": \"Thanks again reviewer c6ps. We would love to hear back from you regarding our previous response. Have we addressed your question adequately? We are happy to engage in further discussion in the remaining few days.\"}",
"{\"title\": \"Response to reviewer pCoa - continued\", \"comment\": \"**On \\u201cThe experiments are limited to Llama-3.1 8B and GPT-4o models\\u201d**: We experiment with four model variants, Llama-3.1-8B-Instruct, Llama-3.1-8B-SFT, GPT-4o, and GPT-4o-mini, within the Llama and GPT model families, covering both popular open-source and proprietary models. Thank you so much for your suggestions of more models. We believe that the Llama model family is a good representation of open-source models, thus we don\\u2019t see an urgent need to experiment with a similar open-source model, Mistral, but we\\u2019ll take that into consideration.\\n\\n \\n\\n> I would suggest that the authors add some justification for not doing DPO on GPT-4o around line 472. \\n\\nThanks for mentioning this point \\u2014 we actually already have a footnote on page 9 that clarifies only SFT is publicly available for GPT. \\n\\n \\n\\n> The authors should discuss the generally lower (hence better) helpful+unsafe values of the Cascade method over CoSAlign. \\n\\nWe have added additional results of CoSAlign+Cascade methods and included extended discussions. Please see the \\u201cResponse to all reviewers\\u201d section for more details. \\n \\n\\nThank you so much for your comments on improving the terminology use & fixing typos. We will make sure to fix these issues in the manuscript. \\n\\n\\n## Response to questions \\n\\n**On ambiguous or contradictory safety configs**: While it\\u2019s challenging to formally define what configs are ambiguous and to model ambiguity in general [5], qualitatively we find that when a CoSAlign-tuned model faces prompts that are under-defined in the config, or low-quality configs in general, the model will \\u201cdefault\\u201d to standard safety behavior as achieved by the one-size-fits-all aligned base model. We find this phenomenon very interesting because it shows that the model is protected by regular safety alignment when configs do not apply. 
\\n\\n \\n\\n> Line 114-115: \\\"However, because of the complexity of safety configs and the difficulty of constructing high-quality demonstrations at scale\\\" - the complexity of safety configs is a problem for this work too. Moreover, in-context learning requires only a handful of demonstrations, hence the argument of absence of demonstrations at scale is void. Hence, I think these arguments against in-context learning are not very strong. \\n\\nWe\\u2019d like to clarify that the difficulty for in-context learning lies in the issue of constructing a set of demonstrations that fully covers the desired safety config. Given real-world safety configs as those in CoSApien, we find it very difficult to fully specify the desired safety behavior with examples alone. For example, \\u201callowing violence but excluding descriptions of severed body parts or limbs\\u201d requires multiple carefully designed demonstrations but is easily described in natural language. \\n\\n \\n\\n> Line 183: How do you check/verify coverage? Line 405: Manually checking that the test set contains all the 3 kinds of prompts is ok, but what is ensuring that to be the case in general for all the prompts? As far as I understand, it is just a dataset containing some prompts, not necessarily according to some safety configs. \\n\\nWe\\u2019d like to clarify that both test sets are test safety configs paired with prompts relevant to the configs (detailed in Line 173 to 183). As detailed in Appendix A.12, during the data curation stage for CoSApien, we produce three types of targeted prompts (allowed, disallowed, partial) for each test config and manually verified all prompts adhere to the desired types. For CoSAlign-Test, we conduct human verification of the automatically produced prompt risk category labels on a subset of 600 prompts and find a high human agreement rate of 89.8%. We then use this category label as proxy to select prompts and ensure each test config is covered. 
This is detailed in A.7 where we provide the full breakdown by prompt type. \\n \\n> How would GPT-4's own safety alignment affect its responses as judge-safe(.) in line 188? \\n\\nWe kept track of the proportion of judge-safe requests that are refused by GPT-4, and found this number to be very low in all cases (less than 2% of responses). We also qualitatively find that when GPT-4 does not produce a refusal, it gives reasonably good rationales and results. Note that besides GPT-4 evaluation, we also conduct human evaluation on CoSApien and find consistent results on judge-safe. \\n \\n> Can the CoSA-Score be made more useful by assigning -1 to refusals/unhelpful responses? \\n\\nThanks for the suggestion! We believe it\\u2019s better to give a CoSA-Score of 0 to refusals/unhelpful responses because these responses should be preferred over responses that are both helpful and unsafe under the current config, which lead to negative scores. A responsible system should never give unsafe responses that are helpful for conducting harmful activities, but if the user query is not answerable given the current safety guidelines, a refusal is a reasonable response.\"}"
"{\"summary\": \"The paper introduces CoSA, a framework designed to adapt LLMs to diverse, context-sensitive safety requirements in real-time without retraining. CoSA allows users to define safety configs, which can be adjusted on-the-fly, enabling flexible safety alignment. The framework includes CoSAlign, a method for training the model to follow these configs using synthetic preference data, and introduces CoSA-Score and CoSApien, a scoring metric and benchmark, respectively, to evaluate both helpfulness and safety adherence of model responses. CoSA demonstrates strong adaptability to unseen safety configs, promoting a pluralistic approach to safety in LLMs. However, the paper has limitations in its mathematical foundations, including the lack of formal guarantees for controllability and robustness, which could impact its theoretical reliability in diverse or adversarial scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. CoSA enables customizable safety configurations, shifting from a one-size-fits-all model to a flexible approach, valuable for applications with varied cultural, legal, or organizational safety needs.\\n2. Safety configs can be switched in real-time, allowing rapid adaptation to new safety requirements without retraining.\\n3. CoSAlign uses synthetic data generation, error scoring, and preference optimization to streamline fine-tuning for safety, reducing manual annotation needs for large-scale applications.\", \"weaknesses\": \"1. The error-scoring mechanism used in CoSA assigns arbitrary penalties for different categories of errors (small penalties for allowed risks, large for disallowed, and medium for non-responsive answers). Without a rigorous, data-driven foundation for these penalty values, there is a risk that these scores might not accurately reflect real-world preferences or safety needs.\\n2. 
Preference optimization may converge poorly due to data noise, potentially causing inconsistent or suboptimal model behavior. Convergence properties under various conditions are not well analyzed.\\n3. The paper does not establish that the model will consistently adhere to the given safety configs across different input distributions. In complex or adversarial input settings, the model might fail to control responses reliably, highlighting the need for experimental evidence or bounds on controllability performance.\\n4. CoSAlign relies heavily on synthetic preference data generated by combining safety configs with diverse prompts, but the distribution of this synthetic data may not match real-world query distributions. Mathematically, this could result in a distributional shift where the model\\u2019s learned preferences are poorly calibrated to actual use cases, limiting generalization.\\n5. CoSA\\u2019s preference optimization relies on a risk taxonomy with a finite set of categories. If the taxonomy is not exhaustive, the model might overfit to specific risk categories observed in training, failing to generalize to novel types of risks. 
This overfitting issue could be mitigated by formalizing the model\\u2019s entropy over response categories or by incorporating latent variable models to allow for broader risk representations.\", \"questions\": \"1.\\tHow does the framework perform on the popular Attack Success Rate (ASR) metric?\\n2.\\tWhat is the performance against popular jailbreak methods like PAIR, DeepInception, GPTFuzzer?\\n3.\\tThere was no discussion of over-safety performance on benchmarks like XSTest or OKTest.\\n4.\\tSeveral papers should be discussed in the related work or motivation of this (training-free) work but were not cited; here are some found in a single search, though there are many more:\\n\\n[1] SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding (https://arxiv.org/abs/2402.08983), \\n[2] Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations\\n(https://arxiv.org/abs/2406.11801), \\n[3] SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models (https://arxiv.org/abs/2406.12274), \\n[4] Controlled Text Generation via Language Model Arithmetic (https://arxiv.org/abs/2311.14479)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Invitation for Comments and Clarifications\", \"comment\": \"Dear reviewer `hHMw`,\\n\\nWe greatly value your feedback and have provided clarifications to your questions and additional experiments on jailbreaking and over-refusal as you requested. To ensure that we have properly addressed your concerns, we would greatly appreciate if you could review our responses and provide any further comments. We look forward to engaging with you before the discussion period ends.\\n\\nThank you for your time and consideration.\"}",
"{\"title\": \"A last reminder to reviewer hHMw\", \"comment\": \"Dear reviewer hHMw,\\nTomorrow is the last day for you to respond to our rebuttal. Would you please consider our last response and let us know what you think? Thanks,\"}",
"{\"title\": \"Invitation for Further Discussion\", \"comment\": \"Dear Reviewer `c6ps`,\\n\\nWe sincerely appreciate your thoughtful feedback and have provided clarifications to your questions, including conducting new experiments on combining CoSAlign with Cascade methods as per your request. Your insights have been very constructive, and we would greatly value your review of our responses to ensure we have fully addressed your concerns.\\n\\nPlease don\\u2019t hesitate to share any additional comments and we are eager to engage with you further before the discussion period ends. Thank you for your time and consideration.\"}",
"{\"comment\": \"Thanks for the detailed comment! I would like to raise my score.\"}",
"{\"title\": \"Response to reviewer hHMw - continued\", \"comment\": \"## Response to questions\\n\\n**Performance on jailbreak methods & ASR**: Per your suggestion, we have conducted additional experiments on 3 popular jailbreak attacks and shown the results above in the **\\u201cResponse to all reviewers\\u201d** section. We find that CoSAlign leads to significantly improved adversarial robustness against these attacks. \\n\\n \\n\\n**Oversafety performance**: We conduct experiments on XSTest, shown in the **\\u201cResponse to all reviewers\\u201d** section, and find CoSAlign leads to notably fewer refusals compared to Llama3.1-8B-Instruct. This indicates the increased controllability allows the model to better determine the safety boundary for refusal. \\n\\n \\n\\nThank you for your suggestions of related work on efficient alignment. Note that we already have a focused discussion on inference-time alignment covering efficient training-free approaches, but we will add these to the manuscript for further completeness. \\n\\n \\n\\n## References \\n\\n[1] [A General Theoretical Paradigm to Understand Learning from Human Preferences](https://arxiv.org/abs/2310.12036) \\n\\n[2] [KTO: Model Alignment as Prospect Theoretic Optimization](https://arxiv.org/abs/2402.01306) \\n\\n[3] [SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734) \\n\\n[4] [Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations](https://arxiv.org/abs/2312.06674) \\n\\n[5] [BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset](https://arxiv.org/abs/2307.04657) \\n\\n[6] [WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs](https://arxiv.org/abs/2406.18495)\"}",
"{\"title\": \"Response to reviewer hHMw\", \"comment\": \"We sincerely thank reviewer `hHMw` for their thoughtful review. We are grateful that you find our controllable safety alignment method \\u201cvaluable for applications with varied cultural, legal, or organizational safety needs\\u201d and \\u201c[reduces] manual annotation needs for large-scale applications.\\u201d We hope our response below addresses your concerns:\\n\\n## Response to Weaknesses \\n\\n**On the error-scoring mechanism assigning arbitrary penalties**: We\\u2019d like to clarify that the purpose of the error penalties is to ensure that responses that *do not violate the safety configs* and *maintain helpfulness* are preferred over responses that do not satisfy these two criteria. It is only used in the response pairing stage and only the relative error between two responses matters. For example, suppose there are responses A, B, and C: \\n\\n- Response A contains one disallowed risk but addresses the question -> error $\\\\beta$ \\n- Response B contains one allowed risk but does not address the question -> error $\\\\alpha+\\\\gamma$ \\n- Response C contains one allowed risk and addresses the question -> error $\\\\alpha$ \\n\\nAs long as $\\\\alpha < \\\\gamma < \\\\beta$ is satisfied, we will prefer Response C over A, C over B, and B over A no matter what the absolute values of these hyperparameters are. We set $\\\\alpha=0.1, \\\\gamma=1, \\\\beta=3$. Further tuning these hyperparameters might lead to additional gains in rare cases (e.g., when the response contains many types of allowed risks), but we empirically find that CoSAlign is already very effective (see Tables 3, 4, 5 in Section 6). \\n\\n**On convergence properties of preference optimization**: We respectfully disagree with the reviewer that this should be listed as a weakness of our work. We do not discuss convergence properties or other math details simply because we are not proposing any preference optimization algorithm. 
Instead, we are just utilizing the existing DPO algorithm, one of the most commonly used and widely adopted algorithms for preference optimization, in our data-centric CoSAlign method. We clarify that our main contribution in this work (summarized already in Line 92-96) is proposing a comprehensive framework (defining the task, setup, evaluation protocol, benchmark, and the CoSAlign method) on controllable safety alignment. While the convergence properties of preference optimization are an active research area, they are not the focus of this work. Moreover, our framework makes no assumptions about the preference optimization technique used. It is straightforward to utilize more advanced algorithms such as IPO [1], KTO [2], and SimPO [3] with CoSAlign. \\n\\n**On input distribution shift for safety configs**: Indeed, the distribution of safety configs may change between training and real-world deployment, and maintaining sufficient controllability under config distribution shift is an important issue; we **already conducted experiments in Section 6.1** (discussed in Line 450-464). We summarize it here again for your convenience: we conducted experiments on config distribution shift by training on CoSAlign-Train, which only contains synthetic categorical risk categories, and testing on CoSApien, our proposed benchmark consisting of complex, fine-grained real-world configs (see examples in Appendix A.12). Human evaluation (Table 4) shows that CoSAlign still possesses strong controllability in this out-of-distribution setting and outperforms all baselines. This is strong evidence that CoSAlign can generalize from simpler training configs to more complex test configs. Please see Section 6.1 for more details. \\n\\nIt is difficult to theoretically quantify the controllability of CoSAlign in all settings, where safety configs can be arbitrarily complex. 
Since our focus and main contributions are proposing a comprehensive framework of controllable alignment, we leave the theoretical aspect of quantifying controllability to future work. \\n\\n**On CoSAlign\\u2019s risk taxonomy**: We\\u2019d like to clarify that the risk taxonomy is only used during the training of CoSAlign in order to synthesize large-scale diverse and relevant preference data. Although this risk taxonomy is likely not exhaustive, we show in Table 4 that CoSAlign maintains its controllability gains on the real-world CoSApien dataset and generalizes to novel risks that are more fine-grained than the training categories. Moreover, Tables 3 and 5 show CoSAlign is still effective on unseen synthetic configs (configs that contain risk categories held out from training). These results provide plenty of evidence that our current risk taxonomy is diverse enough to not overfit to training categories. Using a risk taxonomy with a finite set of categories is also a common approach in recent works such as Llama Guard [4], BeaverTails [5], and WildGuard [6]. Therefore, we argue that a risk taxonomy is an effective *practical* approach for large-scale data synthesis in controllable safety alignment.\"}",
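The response-pairing rule described in the rebuttal above (allowed risks penalized by $\alpha$, disallowed risks by $\beta$, non-responsiveness by $\gamma$, with $\alpha < \gamma < \beta$) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function and variable names are hypothetical, but the penalty values $\alpha=0.1, \gamma=1, \beta=3$ are the ones stated by the authors.

```python
# Sketch of CoSAlign's error score for response pairing (names illustrative).
# Penalty values from the rebuttal: alpha (allowed risk), gamma (non-responsive),
# beta (disallowed risk), with alpha < gamma < beta.
ALPHA, GAMMA, BETA = 0.1, 1.0, 3.0

def error_score(num_allowed: int, num_disallowed: int, addresses_query: bool) -> float:
    """Lower error = more preferred when forming preference pairs."""
    penalty = ALPHA * num_allowed + BETA * num_disallowed
    if not addresses_query:
        penalty += GAMMA
    return penalty

# The three example responses from the rebuttal:
err_A = error_score(0, 1, True)   # one disallowed risk, addresses question -> beta
err_B = error_score(1, 0, False)  # one allowed risk, non-responsive -> alpha + gamma
err_C = error_score(1, 0, True)   # one allowed risk, addresses question -> alpha

# Preference ordering C > B > A holds whenever alpha < gamma < beta:
assert err_C < err_B < err_A
```

As the rebuttal notes, only the relative ordering of the errors matters for pairing, so any hyperparameter triple satisfying the inequality yields the same preferences in these simple cases.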
"{\"title\": \"Thanks for your continued engagement\", \"comment\": \"Dear reviewer `hHMw`,\\n\\nWe sincerely appreciate your follow-up response. Note that our rebuttal mainly serves to clarify potential misunderstandings in our work. We conducted additional experiments to answer your questions and address your concerns within the scope of our work.\\n\\nRegarding your follow-up message on \\u201cI feel that the key points I raised should have been addressed in the first version hence not able to increase much,\\u201d we are wondering if there are any specific key points that you feel have not yet been addressed after our rebuttal? We\\u2019d love to further improve our work based on your feedback or provide further details if needed. \\n\\nSince we have already conducted additional experiments and presented the results, we believe these should be considered on equal footing with the results presented in the initial submission, as the ICLR discussion period is a crucial stage of the submission process and we will include all results in the camera-ready version, if accepted. Thanks so much for your continued engagement and we are always looking to improve our work with your input and if needed, provide further clarification.\"}",
"{\"title\": \"A second reminder about considering our response\", \"comment\": \"Dear reviewer hHMw,\\n\\nOnly one day is left until the end of the discussion period. Would you please consider our response and let us know if we have not addressed any of your concerns?\"}",
"{\"title\": \"Summary of the discussion period\", \"comment\": \"Dear all reviewers and chairs,\", \"we_would_like_to_express_our_sincere_gratitude_to_all_reviewers_for_their_constructive_feedback_and_recognition_of_our_contributions\": \"reviewers find our framework **\\u201cvaluable for applications with varied cultural, legal, or organizational safety needs\\u201d** (`hHMw`) with **\\u201cpractical importance for effective and safe usage of LLMs in diverse applications\\u201d** (`pCoa`), **\\u201cmore powerful and less ambiguous\\u201d** compared to existing approaches (`c6ps`), and **our proposed CoSA-Score is novel** (`6dSX`).\", \"we_have_provided_detailed_responses_and_additional_experiments_to_address_all_reviewer_concerns\": [\"We have clarified to reviewer `hHMw` that convergence properties of preference optimization and theoretical quantifications of controllability are not the focus and are out of scope for this work. Additionally, we have addressed their questions on the error-scoring mechanism and risk taxonomy, and resolved the concern on jailbreaking and over-refusal supported by extensive experiments showing positive results.\", \"We have addressed concerns from reviewer `pCoa` by clarifying that CoSAlign initializes from a safety-aligned model.\", \"We have conducted additional experiments combining CoSAlign and Cascade methods, achieving \\u201cthe best of both worlds,\\u201d and resolved reviewer `c6ps`\\u2019s concerns.\", \"We have also clarified reviewer `6dSX`\\u2019s concerns on core contributions and generalization abilities by referring to relevant sections and experiments in the paper.\", \"We believe reviewers\\u2019 concerns are adequately addressed since all 4 ratings are leaning accept. Thanks again for your feedback and engagement!\"]}",
"{\"title\": \"Response to follow-up question\", \"comment\": \"We sincerely appreciate reviewer `c6ps` for your recognition of our response and additional results. In the \\u201cResponse to all reviewers\\u201d section we included the results on the seen split for conciseness, and now we present the exact results on combining CoSAlign with Cascade methods on the unseen split:\\n\\n \\n\\n| Model | CoSA-Score | Help+safe | Help+unsafe | \\n|--------------------------------------|------------|-----------|-------------| \\n| Llama3.1-8B-Instruct | 0.091 | 14.7% | 2.9% | \\n| Llama3.1-8B-Instruct+Cascade | 0.095 | 13.4% | 1.5% | \\n| Llama3.1-8B-Instruct+Cascade-Oracle | 0.119 | 14.7% | **0.0%** | \\n| L3.1-8B-Inst-CoSAlign | 0.293 | 42.8% | 8.0% | \\n| L3.1-8B-Inst-CoSAlign+Cascade | 0.274 | 36.6% | 4.0% | \\n| L3.1-8B-Inst-CoSAlign+Cascade-Oracle | **0.364** | **42.8%** | **0.0%** | \\n\\n \\n\\nAs we see above, similar to the results on the seen split, applying Cascade on the CoSAlign-tuned model can effectively reduce the rate of helpful+unsafe responses, at the cost of slightly decreased helpful+safe responses, trading off helpfulness for increased safety. The combination of CoSAlign and Cascade-Oracle continues to achieve the highest CoSA-Score, helpful+safe responses, and the lowest helpful+unsafe responses across all methods, demonstrating the \\u201cbest of both worlds\\u201d when the two methods are fused together. \\n\\n\\nThanks again for your response. Please let us know if you have any additional questions or comments, and we are happy to continue this engaging and productive discussion!\"}",
"{\"summary\": \"The current safety alignment of large language models (LLMs) lacks flexibility and requires re-training. To address this, the authors propose CoSAlign, a data-centric method for LLMs to adapt to different safety configurations. They also develop a controllability evaluation protocol that considers helpfulness and configured safety, summarizing them into CoSA-Score. They construct CoSApien, a human-authored benchmark based on real-world LLM use cases with diverse safety requirements. CoSAlign leads to substantial gains in controllability over strong baselines, encouraging better representation and adaptation to pluralistic human values, increasing LLMs' practicality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"+ The proposed CoSAlign method offers flexibility by allowing models to be adapted to diverse safety requirements without the need for retraining.\\n+ The introduction of a new controllability evaluation protocol that balances both helpfulness and safety, summarized into a single CoSA-Score, is novel.\", \"weaknesses\": [\"Some parts are not easy to follow.\", \"The core contribution of the work is not clear. It is more like multiple small points mixed up.\", \"While the method offers flexibility with safety configs, it may face challenges in generalizing across highly diverse or conflicting safety requirements.\"], \"questions\": \"1. Section 3: The evaluation protocol is like the one shown in \\\"Lin, Stephanie, Jacob Hilton, and Owain Evans. \\\"Truthfulqa: Measuring how models mimic human falsehoods.\\\" arXiv preprint arXiv:2109.07958 (2021).\\\" What are the advantages of your protocol?\\n\\n2. Section 5: The CoSAlign pipeline is more like a rule-based method. I doubt its generalization ability in real-world deployment. 
Moreover, this part is kind of messy and not easy to follow.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": [\"The paper introduces CoSA, a framework designed to adapt LLMs to diverse, context-sensitive safety requirements in real-time without retraining.\", \"The topic is of interest.\", \"The reviewers found the proposed CoSA framework to be interesting.\", \"Some of the technical details were missing or hard to follow.\"], \"additional_comments_on_reviewer_discussion\": [\"There were some issues raised in the initial reviews, including\", \"the error scoring mechanism\", \"convergence properties of preference optimization\", \"initialization strategy\", \"additional experiments combining different methods\", \"generalization abilities.\", \"These issues were partly addressed in the rebuttal. All the reviewers now recommend acceptance. The authors should incorporate the clarification in the rebuttal in the camera ready version.\"]}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Dear Authors,\\n\\nThank you for your clarifications. Based on the current state of the work, I am inclined to slightly increase my score. However, I feel that the key points I raised should have been addressed in the first version, hence I am not able to increase it much.\\n\\nI recommend that the authors incorporate these discussions and the referenced citations into the appendix in future revisions.\\n\\nThanks.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We sincerely appreciate all reviewers for their detailed review. We are glad that reviewers find our proposed framework \\u201cvaluable for applications with varied cultural, legal, or organizational safety needs\\u201d and \\u201cof practical importance for effective and safe usage of LLMs in diverse applications.\\u201d Reviewers find that our proposed CoSAlign method, which \\u201cproperly incorporates both safety and helpfulness,\\u201d is important, and commend our human-authored CoSApien benchmark as well as the novel CoSA-Score evaluation protocol.\\n\\nWhile we believe we have provided a comprehensive set of experiments that fully justify and support our contributions, we chose to provide additional experimental results to answer the specific questions from Reviewers `hHMw` (on jailbreak & over-refusal) and `c6ps` (on Cascade methods). While we have already conducted general safety evaluations in Section 6.2 (see Table 6 for details), and one of the benchmarks (StrongReject [1]) specifically focuses on jailbreaks, we now conduct additional experiments on jailbreak attacks and over-refusal for further soundness. Moreover, we now show the applicability of combining CoSAlign and Cascade methods. Note that because the Cascade methods are expensive, we still argue that CoSAlign (without Cascade) is the best method in practice. Please find the results below. \\n\\n\\n## Additional experiments on jailbreak attacks and over-refusal\\nIn this section, we show that CoSAlign **significantly improves adversarial robustness against jailbreak attacks and reduces over-refusal** compared to the base model. \\n\\nWe use off-the-shelf Llama3.1-8B-Instruct as the baseline, and evaluate the CoSAlign-tuned variant using the safety config that indicates no type of risk is allowed. 
We conduct the popular GPTFuzzer [2], PAIR [3], and TAP [4] attacks on Llama3.1-8B-Instruct and our Llama3.1-8B-Instruct+CoSAlign models and report **Attack Success Rate** (lower is better) as the metric. All experiments are conducted on HarmBench [5], a standardized testing framework designed to enable fair comparisons. The results are summarized below: \\n\\n| Model | GPTFuzzer | PAIR | TAP | \\n|-------------------------------|-----------|----------|----------| \\n| Llama3.1-8B-Instruct | 68.8 | 26.3 | 32.5 | \\n| Llama3.1-8B-Instruct+CoSAlign | **38.8** | **23.8** | **18.8** | \\n\\nSurprisingly, we find that CoSAlign not only avoids degrading adversarial robustness but also **significantly enhances** it against popular jailbreak attacks. This result suggests that the improved safety controllability provided by CoSAlign does not conflict with adversarial robustness. We hypothesize that CoSAlign's enhanced controllability may implicitly strengthen the model\\u2019s safety reasoning capabilities, thereby making it more robust to attacks designed to \\u201ctrick\\u201d LLMs into engaging in disallowed behaviors. \\n\\nWe also conduct experiments to investigate the over-refusal rate on XSTest [6] before and after applying CoSAlign. We report results on the safe subset, where no prompt should be refused, and present refusal rates (lower is better): \\n\\n| Model | Full Refusal | Partial Refusal | Overall Refusal | \\n|-------------------------------|--------------|-----------------|-----------------| \\n| Llama3.1-8B-Instruct | 8% | 0% | 8% | \\n| Llama3.1-8B-Instruct+CoSAlign | 1.6% | 1.2% | **2.8%** | \\n\\nResults show that **CoSAlign significantly reduced the over-refusal rate** of Llama3.1-8B-Instruct. This indicates that the enhanced safety controllability provided by CoSAlign helps the model better reason about which prompts should be refused and which should not.\"}",
"{\"title\": \"Response to reviewer pCoa - continued\", \"comment\": \"> Line 268: Are the training prompts corresponding to or independent of any safety configs used in the training? Can the risk taxonomy made in the paper be used in place of the other prior taxonomies, or is it specially tailored to CoSA?\\n\\nIn our current pipeline, the training prompts correspond to the training safety configs because we first derive the risk taxonomy based on the training prompts (Appendix A.2), and then synthesize relevant safety configs (Line 302). But in principle, the training prompts do not need to be curated together with the risk taxonomy as long as the taxonomy covers a broad range of risks relevant to the training prompts. Our derived taxonomy is not specifically tailored to CoSA and can be used in place of other prior taxonomies. \\n\\n \\n\\n> Is $C_{i, j} \\\\in R$, in line 313? \\n\\nYes, each config risk category is a subset of the taxonomy $R$. \\n\\n \\n\\n> How is judge-help conceptually different from judge-addr? \\n\\nWhile judge-help evaluates the helpfulness of a response in detail and gives a score of 0 to 5, judge-addr only considers whether the response is a refusal or not. It does not consider \\u201chow well\\u201d the model answers the prompt. \\n\\n \\n\\n> Line 350: Why are allowed risks penalized by $\\\\alpha$? I think they should be rewarded, as the model is adhering to the safety config. \\n\\nGreat question! We argue that models should only use allowed risks *as needed* in order to achieve better helpfulness. For example, if violent content is allowed in the videogame setting, the model should not be rewarded for producing violent descriptions on prompts where violence is not needed. Therefore, we give a small but positive penalty for allowed risks. \\n\\n \\n\\n> Line 363: Why is the adversarial partition of WildGuardTrain removed from the training set? 
\\n\\nWe made this design choice to limit the number of training prompts due to our compute resources. Nevertheless, as shown in the additional results in the \\u201cResponse to all reviewers\\u201d section, CoSAlign achieves increased robustness against adversarial attacks. \\n\\n \\n\\n> Line 407: Does \\\"helpful\\\" mean that the judge-help outputs anything > 0 or equal to 1? Similarly, what values of the helpfulness scores from humans are considered to be \\\"helpful\\\" in Table 4? \\n\\nWe\\u2019d like to clarify that (detailed in Appendix A.6) for both the GPT-4 and human judges, raw scores are given on the scale of 0 to 5. A response is considered helpful if the judge-help output is $\\\\geq 1$. This ensures consistent results between GPT-4 and human judges. \\n\\n \\n\\n## References \\n\\n[1] [Investigating Cultural Alignment of Large Language Models](https://aclanthology.org/2024.acl-long.671) \\n\\n[2] [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) \\n\\n[3] [Collective Constitutional AI: Aligning a Language Model with Public Input](https://arxiv.org/abs/2406.07814) \\n\\n[4] [A proposal for importing society\\u2019s values](https://aligned.substack.com/p/a-proposal-for-importing-societys-values) \\n\\n[5] [We're Afraid Language Models Aren't Modeling Ambiguity](https://arxiv.org/abs/2304.14399)\"}",
"{\"title\": \"Response to reviewer 6dSX\", \"comment\": \"We sincerely appreciate reviewer `6dSX` for their thoughtful review. We are glad that you find our method flexible and our evaluation protocol novel. We hope our response addresses your questions and concerns:\\n\\n## Response to Weaknesses \\n\\n> The core contribution of the work is not clear. It is more like multiple small points mixed up. \\n\\nSummarized in Line 92-96, our core contributions are introducing the Controllable Safety Alignment framework and formulating the task of efficiently adapting models to diverse safety requirements at inference time. Our framework rethinks the current paradigm of safety alignment and enables LLMs to be responsibly deployed in diverse settings. \\n\\nTo construct this comprehensive framework, we propose the CoSA-Score evaluation protocol, a human-curated benchmark CoSApien, and CoSAlign, a method for improving the controllability of LLMs. We believe all of these contributions are crucial for effectively adapting LLMs to diverse use cases. By providing this comprehensive set of artifacts, we enable rich future work on controllable safety alignment as pointed out in our discussion section. \\n\\n \\n\\n> While the method offers flexibility with safety configs, it may face challenges in generalizing across highly diverse or conflicting safety requirements. \\n\\nWe have already conducted experiments in Section 6.1 (discussed in Line 450-464) to evaluate the impact of the distribution shift from simpler safety configs to more complex ones. We refer the reviewer to that section for a detailed discussion. We summarize it here again for your convenience: we train on CoSAlign-Train, which only contains synthetic categorical risk categories, and test on CoSApien, our proposed benchmark consisting of complex, fine-grained real-world configs (see examples in Appendix A.12). 
Human evaluation (Table 4) shows that CoSAlign still possesses strong controllability in safety configs that are more complex than those the model was trained on, and outperforms all baselines. This shows CoSAlign\\u2019s strong generalization ability from simpler training configs to more complex test configs. \\n\\n \\n\\nFinally, if there is a specific section you found unclear or hard to follow, please don\\u2019t hesitate to let us know! \\n\\n## Response to questions \\n\\n> Section 3: The evaluation protocol is like the one shown in \\\"Lin, Stephanie, Jacob Hilton, and Owain Evans. \\\"Truthfulqa: Measuring how models mimic human falsehoods.\\\" arXiv preprint arXiv:2109.07958 (2021).\\\" What are the advantages of your protocol? \\n\\nWe\\u2019d like to clarify that while both our protocol and TruthfulQA use an LLM as a judge, CoSA-Score uses two separate judges, judge-help and judge-safe, to measure helpfulness and configured safety, and summarizes them into a single score that takes both aspects into account. We argue this is a novel way to depict the tradeoff between the helpfulness and safety aspects that allows distinguishing nuances such as: a response that is both unsafe and helpful should be penalized more than a response that is unsafe but not helpful (for example, if the prompt requests criminal advice, a highly helpful response would likely cause greater harm to society than one that is less helpful). \\n\\n \\n\\n> Section 5: The CoSAlign pipeline is more like a rule-based method. I doubt its generalization ability in real-world deployment. Moreover, this part is kind of messy and not easy to follow. \\n\\nWe\\u2019d like to clarify that because CoSAlign selects response pairs through the error score derived from LLM judge outcomes, it is not a rule-based method. 
On the other hand, recent work on rule-based rewards has been shown to be effective for controlling nuanced aspects of safety [1], so we agree there is indeed room to explore in the rule-based space. \\n\\nRelated to CoSAlign\\u2019s generalization ability in real-world deployment, we have created the human-authored benchmark, CoSApien, with the help of professional red teaming specialists who are adept at real-world safety scenarios (detailed in Section 4). Thus we believe that CoSApien covers common safety scenarios in practice. In Table 4, Section 5.2, human evaluation shows that CoSAlign not only maintains high controllability but also outperforms all baselines, demonstrating the real-world applicability of our method. Due to the space constraints, we had to move some details of CoSAlign to the appendix, but we will continually iterate to improve the clarity of this part. \\n\\n \\n\\nWe also conducted additional experiments on adversarial robustness, over-refusal, and combining CoSAlign with Cascade methods. Please find more details in the **\\u201cResponse to all reviewers\\u201d** section. Thank you! \\n\\n \\n## References \\n[1] [Rule Based Rewards for Language Model Safety](https://cdn.openai.com/rule-based-rewards-for-language-model-safety.pdf)\"}",
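As a concrete illustration of the two-judge protocol discussed above, here is a minimal sketch of how a CoSA-Score-style summary could be computed. This is an assumption-laden reconstruction, not the paper's implementation: it only mirrors the properties stated in this discussion (helpful+safe responses score positive, helpful+unsafe responses score negative, refusals score zero); the function names, the [0, 1] helpfulness scale, and the exact aggregation are illustrative and may differ from the paper.

```python
# Illustrative sketch of a CoSA-Score-style aggregation: helpfulness and
# configured safety are judged separately, then summarized into one score.
# Names and normalization are hypothetical; only the sign behavior follows
# the discussion (helpful+unsafe -> negative, refusal -> 0, helpful+safe -> positive).

def per_response_score(helpfulness: float, is_safe: bool) -> float:
    """helpfulness in [0, 1]; refusals have helpfulness 0 and thus score 0."""
    return helpfulness if is_safe else -helpfulness

def cosa_score(judgments: list) -> float:
    """Average the signed per-response scores over an evaluation set.

    `judgments` is a list of (helpfulness, is_safe) pairs, one per response.
    """
    return sum(per_response_score(h, s) for h, s in judgments) / len(judgments)

# A helpful+safe response, a refusal, and a helpful+unsafe response:
score = cosa_score([(0.8, True), (0.0, True), (0.6, False)])
```

Under this sketch, a highly helpful but unsafe response drags the score down more than a less helpful unsafe one, matching the rationale given above for penalizing helpful+unsafe responses most heavily.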
"{\"title\": \"Response to all reviewers - continued\", \"comment\": \"## Additional experiments on combining CoSAlign and Cascade methods\\n\\nIn this section, we show that **Cascade methods can be effectively incorporated into CoSAlign-tuned models** to trade off helpfulness for safety. \\n\\n \\n\\nBecause the Cascade methods use a filtering model to label unsafe responses and replace them with refusals, they can also be applied on top of CoSAlign. Below, we demonstrate the effectiveness of CoSAlign+Cascade on the seen split of CoSAlign-Test (the unseen split follows the same pattern): \\n\\n| Model | CoSA-Score | Help+safe | Help+unsafe | \\n|--------------------------------------|------------|-----------|-------------| \\n| Llama3.1-8B-Instruct | 0.182 | 23.7% | 2.0% | \\n| Llama3.1-8B-Instruct+Cascade | 0.171 | 21.9% | 1.6% | \\n| Llama3.1-8B-Instruct+Cascade-Oracle | 0.201 | 23.7% | **0.0%** | \\n| L3.1-8B-Inst-CoSAlign | 0.408 | 52.0% | 5.2% | \\n| L3.1-8B-Inst-CoSAlign+Cascade | 0.368 | 45.5% | 3.0% | \\n| L3.1-8B-Inst-CoSAlign+Cascade-Oracle | **0.454** | **52.0%** | **0.0%** | \\n\\nResults show that, similar to applying Cascade on the base Llama-3.1-8B model, applying Cascade on the CoSAlign-tuned model can also reduce the rate of helpful+unsafe responses. Cascade lowers helpful+unsafe responses at the cost of decreasing helpful+safe responses, trading off helpfulness for better safety. We acknowledge that while CoSAlign+Cascade led to slightly more helpful+unsafe responses than Llama3.1-8B-Instruct+Cascade, the gap is small and CoSAlign leads to significantly improved helpful+safe responses, and thus a much higher CoSA-Score. \\n\\nIn summary, applying Cascade-Oracle on CoSAlign achieves the highest CoSA-Score, the highest helpful+safe responses, and the lowest helpful+unsafe responses across the board, demonstrating the effectiveness of combining both methods. 
\\n\\n \\n\\n## References \\n\\n[1] [A StrongREJECT for Empty Jailbreaks](https://arxiv.org/abs/2402.10260) \\n\\n[2] [GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts](https://arxiv.org/abs/2309.10253) \\n\\n[3] [Jailbreaking Black Box Large Language Models in Twenty Queries](https://arxiv.org/abs/2310.08419) \\n\\n[4] [Tree of Attacks: Jailbreaking Black-Box LLMs Automatically](https://arxiv.org/abs/2312.02119) \\n\\n[5] [HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal](https://arxiv.org/abs/2402.04249) \\n\\n[6] [XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models](https://arxiv.org/abs/2308.01263)\"}",
"{\"comment\": \"Thank you for providing the results for these experiments.\\nMy initial worry was that CoSAlign improves helpfulness but compromises safety too much, as trading the relatively robust safety of cascade in favor of helpfulness feels like a potentially problematic tradeoff.\\nHowever, the combined method of CoSAlign+cascade does appear to greatly improve helpfulness without compromising safety too much compared with other methods.\\nWith this my concerns are addressed, and after going over the other reviews and responses, I have decided to raise my score.\"}",
"{\"summary\": \"This paper addresses the problem of having custom safety configurations (configs) for diverse uses of LLMs. The primary method uses preference-learning to align the LLMs such that they can follow the custom safety configs provided in their system prompt during inference. The results show increased safety controllability over existing approaches, suggesting the efficacy of the method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents an interesting instance of preference learning applied to have customized safety, which may be of practical importance for effective and safe usage of LLMs in diverse applications.\\n2. The work contributes a new human-designed benchmark, CoSApien, for controllable safety alignment.\\n3. The claims of the paper are well-supported by human evaluations to show correctness of the assessments of the methods.\", \"weaknesses\": \"1. It may not be feasible to specify all aspects of safety, even by domain experts, in natural language. It's tedious and requires a lot of manual effort. Hence, the authors should consider and report the practical manual overhead of their alignment method, compared to the traditional methods where there are universal notions of safety to some general extent and it does not need to be redefined every time (especially the commonly desired configs).\\n2. I don't see how the safety setting can be significantly different from the non-safety setting studied in other similar works on plurality alignment, necessitating the methods of the paper.\\n3. To determine \\\"disallowed\\\" content, we would need some kind of a bigger set of safety guidelines, from which those corresponding to the specified safety configs are removed and the remaining are disallowed. There is no mention of such a set of safety guidelines. Specifically, how would one systematically design prompts that elicit disallowed content, such as that in line 180?\\n4. 
I think it may be advantageous to evaluate CoSApien also with the kinds of content that an actual LLM can generate for the different kinds of prompts, for a real proof of its prompts being effective. \\n7. There is a lack of details about how the risk taxonomy is constructed. Line 295 says it is made from training data, but doesn't mention how. The footnote on page 6 says that the taxonomy consists of fewer categories and shorter definitions, which suggests that it is human-made. \\n8. The experiments are limited to Llama-3.1 8B and GPT-4o models. It would be interesting to see how CoSAlign would work with other LLMs such as Mistral. \\n9. I would suggest that the authors add some justification for not doing DPO on GPT-4o around line 472.\\n10. The authors should discuss the generally lower (hence better) helpful+unsafe values of the Cascade method over CoSAlign. \\n12. Terminology used before definition:\\n 1. Line 87: \\\"training prompts\\\"\\n 2. Line 88: \\\"error-scoring mechanism\\\". What is *error* here?\\n 3. Line 235: although the section for CoSAlign-Test is given, I think it should be described before using. \\n 4. Line 343: \\\"data generator model\\\"? Is it different from a language model?\\n 5. Line 484: \\\"*overgeneralization* to disallowed content\\\"?\\n13. Typos:\\n 1. Line 146: \\\"provide\\\" -> \\\"provider\\\"\\n 2. Line 163: \\\"non-authorized\\\" -> \\\"authorized\\\"\\n 3. Table 1 caption: \\\"deteriorates\\\" -> \\\"deteriorated\\\"\\n 4. Line 372: \\\"controllability\\\" -> \\\"controllable\\\"\\n 5. Table 3 has repeated setups 1 and 3 under CoSAlign methods.\\n 6. The legend for Figure 5 says that the red bars are for ICA, but the text says that it is for SFT. I think all of ICA, SFT, and CoSAlign should be shown in that figure.\", \"questions\": \"1. What would happen for ambiguous or contradictory safety configs? What if they contradict the system prompt that follows the safety configs during inference?\\n3. 
Line 114-115: \"However, because of the complexity of\nsafety configs and the difficulty of constructing high-quality demonstrations at scale\" - the complexity of safety configs is a problem for this work too. Moreover, in-context learning requires only a handful of demonstrations, hence the argument of absence of demonstrations *at scale* is void. Hence, I think these arguments against in-context learning are not very strong.\\n4. Line 183: How do you check/verify coverage?\\n5. How would GPT-4's own safety alignment affect its responses as judge-safe(.) in line 188?\\n6. Can the CoSA-Score be made more useful by assigning -1 to refusals/unhelpful responses?\\n7. Line 268: Are the training prompts corresponding to or independent of any safety configs used in the training?\\n8. Can the risk taxonomy made in the paper be used in place of the other prior taxonomies, or is it specially tailored to CoSA?\\n9. Is $C_{i,j}\\\\in R$, in line 313?\\n10. How is judge-help conceptually different from judge-addr?\\n11. Line 350: Why are allowed risks *penalized* by $\\\\alpha$? I think they should be rewarded, as the model is adhering to the safety config.\\n12. Line 363: Why is the adversarial partition of WildGuardTrain removed from the training set?\\n13. Line 405: Manually checking that the test set contains all the 3 kinds of prompts is ok, but what is ensuring that to be the case in general for all the prompts? As far as I understand, it is just a dataset containing some prompts, not necessarily according to some safety configs.\\n14. Line 407: Does \\\"helpful\\\" mean that the judge-help outputs anything > 0 or equal to 1? Similarly, what values of the helpfulness scores from humans are considered to be \\\"helpful\\\" in Table 4?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer c6ps - continued\", \"comment\": \"## Response to questions\\n\\n> Is there some way to combine the cascades method with the CoSAlign method? \\n\\nYes! The Cascade method can be naturally incorporated into any model because it adds a post-hoc filtering step, which replaces unsafe responses deemed by the filtering model with refusals. This is similar to what deployed systems usually do (e.g., OpenAI\\u2019s content filter on ChatGPT). Although the filtering model is usually smaller than the generator model, we use (1) the generator model itself (2) the evaluator model as filtering model to construct this strong baseline. We have included results of combining Cascade with CoSAlign in the **\\u201cResponse to all reviewers\\u201d** section.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}"
]
} |
ERcGlGIM2D | BLIPEE: Fast and Robust BLIP with Adversarially Trained Early Exits | [
"Divya Jyoti Bajpai",
"Manjesh Kumar Hanawal"
] | In recent years, Vision-Language Models (VLMs) have shown remarkable performance improvements in vision-language tasks. However, their large size poses challenges for real-world applications where inference latency is a concern. To tackle this issue, we propose employing Early Exit (EE) strategies in VLM. However, training exit classifiers in VLMs is challenging, particularly with limited labeled training data. To address this, we introduce BLIPEE, an adversarial training approach within a GAN-based framework. Here, each exit consists of a transformer layer and a classifier, and the transformer layer is adversarially trained to produce feature representations similar to the final layer, while a feature classifier serves as the discriminator. Our method focuses on performing input-adaptive inference that mitigates the overthinking issue and increases inference speed. Experimental results demonstrate the effectiveness of our approach in enhancing accuracy and model robustness by mitigating overthinking and the phenomenon of mid-crisis that we highlight. The anonymized source code is available at https://anonymous.4open.science/status/BLIPEE-3ED3. | [
"Early Exits; Multimodal model"
] | https://openreview.net/pdf?id=ERcGlGIM2D | https://openreview.net/forum?id=ERcGlGIM2D | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zw4b7Yd6Jk",
"zS1CPoSKlV",
"u04ReF9KBl",
"s50sJ4GRVx",
"pR2TvlZ0x4",
"kBuhoS6Ms0",
"gpNDiFNXrb",
"dDYeyI1hKG",
"c3lKeUqB2D",
"Z3uDSJuVFa",
"VQZKJ62iUy",
"OyiYpa8l3c",
"LSgspfOtMk",
"JAkROwvCKd",
"7CVQ4ROJkm",
"4d03NRN9HC",
"3iDmXxFTBs"
],
"note_type": [
"comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1734341994162,
1732643668822,
1730609661859,
1732274801507,
1730382692244,
1732275227296,
1732487348918,
1732507050938,
1732414891789,
1732274943549,
1730730023984,
1732995529559,
1732275057139,
1732474075335,
1730603344843,
1733220378268,
1732473995315
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Reviewer_ofR1"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Reviewer_cURp"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Reviewer_ofR1"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Reviewer_ofR1"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Reviewer_KQvs"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11663/Reviewer_g7un"
],
[
"ICLR.cc/2025/Conference/Submission11663/Reviewer_cURp"
],
[
"ICLR.cc/2025/Conference/Submission11663/Authors"
]
],
"structured_content_str": [
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"title\": \"Reminder\", \"comment\": \"Dear reviewers,\\n\\nIt is a gentle reminder to acknowledge our rebuttal and make changes accordingly.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"summary\": \"The paper presents an early exit strategy for training VLMs. The key idea is to attach exits across different language model layers, with each exit consisting of a transformer layer and a classifier. The transformer layers and the classifiers are trained through a GAN-based framework such that the transformer layers generate feature representations similar to the last layer. Training consists of (1) backbone fine-tuning and (2) exit training. For exit training, a semi-supervised setup and an unsupervised setup are discussed to train the transformer layer to generate features similar to the final layers. During inference, captions are generated in an autoregressive manner. Experimental results show that the proposed method outperforms prior early exit methods with less computational cost.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The motivation is clear; the mid-crisis and overthinking phenomenon is intriguing.\", \"Early exit could also provide some insights into the reasoning mechanisms of LLMs, as shown in Figure 3.\", \"The idea is novel; using a GAN-based method for early exit is interesting and seems effective.\"], \"weaknesses\": [\"The motivation for backbone fine-tuning is unclear and not explained. Why not use a pre-trained backbone? Does it help early exit?\", \"Most of the baselines come from earlier works. Baselines from recent VLM works, such as LLaVA, miniGPT-4, etc., are missing.\", \"According to Tables 1 and 2, the performance improvement seems incremental. Instead of the speedup calculated from L323, what is the speedup on the hardware specified in the paper? Does the actual speedup align with this calculation?\"], \"minor_issue\": \"Citations within the text are strange, check the formatting instruction.\", \"questions\": \"1. What is the speedup on hardware? Does early exit also speed up the causal self-attention (autoregressive) model?\\n2. What is $w_i$ in L342?\\n3. 
In Eqn (2), the last $y*$ should be $y*_{1:t-1}$?\\n4. Does other GAN-framework work? Such as WGAN?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the insightful comments.\", \"que_1\": \"The topic is too limited. The early exits issue is a good question for the existing VLM models, especially for large VLM models. The authors just implement the early exits strategy on a single BLIP model, limiting its scalability.\", \"ans_1\": \"Yes, our method is scalable to multiple existing VLMs; however, we chose to perform experiments on the BLIP-2 model, as BLIP-2 provides the flexibility to use a wide variety of encoder and decoder models. For example, we have used OPT 2.7B and 6.7B and FlanT5 models as the decoder in our method. This makes our method general to any kind of VLM, as they also consider a combination of encoder and decoder models, and the idea is to show that our method can work under such scenarios; hence, we have used BLIP-2, which provides an option to use multiple decoders.\", \"que_2\": \"The illustrations of the proposed components are not clear. The authors should re-organize the method section for better presentation. An algorithmic pseudo-code of the entire process should also be provided.\", \"ans_2\": \"Sure, we will take up your suggestion and add an algorithmic pseudocode to the final version of the paper, and we will further clarify the use of the different components of our paper.\", \"que___3\": \"The experiments are insufficient. The authors compare the efficiency of their BLIPEE with other large VLMs like Flamingo, however, the authors do not apply their EE strategy to Flamingo for \\u201cplug-and-play\\u201d comparison.\", \"ans___3\": \"Existing works such as DeeBERT, PABEE only compare against the early exiting baselines and the original model (comparing against the vanilla BERT and existing EE approaches applied to BERT and not against GPT-2, LLaMA kind of models), which motivates us to only compare our approach against BLIP-2 vanilla inference and other existing EE methods applied to BLIP-2. 
We have compared with OFA and Frozen as they are compared in BLIP-2 just to show that even after becoming faster our method has not reduced the performance of BLIP-2. \\n\\nPlease note that we propose an EE method and not a VLM itself; hence, the comparison is against the existing EE methods.\", \"que_4\": \"Since the proposed method relies on additional transformer and classifier layers, the authors should provide the comparison on model complexity.\", \"ans_4\": \"We have provided the comparison on computational complexity; the speedup reported is a metric that considers the computational complexity of our method. The speedup metric is proportional to the computational requirements of the model. We have also added the parameter cost of additional exits during the speedup calculation. Not only this, we have also reported the model size.\\n\\nWe hope that we clarified most of your doubts. If you have any further questions please let us know, else please consider reassessing the scores.\"}",
"{\"summary\": \"This paper proposes an early exit strategy to reduce the inference latency in Vision-Language Models. An adversarial training network within a GAN-based framework BLIPEE is utilized to reduce the negative impact of limited labeled training data. In the BLIPEE network, each exit contains a transformer layer and a classifier. The used input-adaptive inference mitigates the overthinking issue and increases inference speed. Experimental results show the effectiveness of the proposed BLIPEE. Authors provide anonymized source codes.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Some codes are provided to increase credibility of the BLIPEE network.\\n\\nVarious results show the effectiveness of the BLIPEE network. The designed method can improve the inference speed while yielding high-quality outputs.\\n\\nTables and figures are clear. I can understand them easily.\\n\\nLimitation section is provided to present the work comprehensively.\", \"weaknesses\": \"The compared methods are not state-of-the-art. The newest compared methods (OFA and Flamingo) were published in 2022. Some state-of-the-art works are required for comparison.\\n\\nIn Table 2, BLIPEE-V-O and BLIPEE-V-F contain more Train Params than BLIP-2 V-O and BLIP-2 V-F. Why do BLIPEE-V-O and BLIPEE-V-F have higher Spd than BLIP-2 V-O and BLIP-2 V-F?\\n\\nIn Figure 2, \\\"Layers\\\" should be \\\"Layer number\\\".\\n\\nSome references need to be revised, such as Li et al. (2020) in Line 634-642.\\n\\nSome grammatical errors, such as \\\"P_N denote the probability score ...\\\" in Line 215.\", \"questions\": \"Please address the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the insightful comments.\", \"ques_1\": \"The compared methods are not state-of-the-art. The newest compared methods (OFA and Flamingo) are published in 2022. Some state-of-the-art works are required for comparison.\", \"ans_1_the_reason_of_not_comparing_with_the_sota_methods_is\": \"1) We are not proposing a new VLM that can beat SOTA. 2) Our objective is to speed up the existing BLIP-2 VLM model. We chose BLIP-2 as it provides flexibility to use any encoder and decoder. 3) We have compared with OFA and Flamingo as they are compared in BLIP-2 just to show that even after becoming faster our method has not reduced the performance of BLIP-2.\", \"ques_2\": \"In Table 2, BLIPEE-V-O and BLIPEE-V-F contain more Train Params than BLIP-2 V-O and BLIP-2 V-F. Why BLIPEE-V-O and BLIPEE-V-F have higher Spd than BLIP-2 V-O and BLIP-2 V-F?\", \"ans_2\": \"We have added exits to the intermediate layers, which adds to the parameters of the backbone during training. But during inference, not all layers are required for prediction, and a sample can exit earlier before even reaching the final layer. As soon as a sample passes through a layer with an exit, the confidence of that layer in the prediction is checked, and if the exit is confident enough, the sample exits the backbone without going deeper into the backbone. That speeds up the inference process.\", \"presentations_issues\": \"In Figure 2, \\\"Layers\\\" should be \\\"Layer number\\\".\\nSome references needs to be revised, such as Li et al. (2020) in Line 634-642.\\nSome grammatical errors, such as \\\"P_N denote the probability score ...\\\" in Line 215\\nThanks for pointing these out, we will fix them in the final version.\\n\\nWe hope that we clarified most of your doubts; if you have any further questions, please let us know. If not, please consider reassessing the scores, as it seems you liked our work except for some minor issues that can be easily fixed.\"}",
"{\"comment\": \"could you please provide more details about the experiment as well? Final exit is 660.2 and early exit is 297.8 which seems much better than theoretical speedup.\"}",
"{\"title\": \"Further details\", \"comment\": \"Sure\\n\\nThe model used is BLIP-2 ViT-FlanT5-XL, and the other setups are the same as detailed in the paper; we performed inference on the MSCOCO dataset with attached exits and got these results. Actually, the speedup and time reduction are both consistent, i.e., they do not deviate much. If you need any specific info, please let us know.\\n\\nThanks\"}",
"{\"title\": \"Thanks for reply\", \"comment\": \"Is it possible to conduct an experiment using the specified hardware mentioned in the paper to compare the theoretical speedup with the actual speedup? Thank you!\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the insightful comments.\", \"que_1\": \"The motivation for backbone fine-tuning is unclear and not explained. Why not use a pre-trained backbone? Does it help early exit?\", \"ans_1\": \"Backbone fine-tuning is required because the additional transformer layer in the exits needs to be trained. If randomly initialized, the performance came out to be nearly zero for the initial layers. Hence, fine-tuning is required for most early exit methods.\", \"que__2\": \"Most of the baselines come from earlier works. The baseline from recent VLM works, such as LLaVA, miniGPT-4, etc. are missing.\", \"ans_2\": \"As in our method we are adding exits to the BLIP-2 decoder models, the main baselines are existing early exit methods and the vanilla BLIP-2 inference; we have added other baselines just to give a sense that BLIP-2 is faster with EE methods with comparable performance. We cannot directly compare the existing baselines with an EE model, but the major baseline is how fast the model inference is with what loss in performance as compared to existing EE methods and the vanilla inference. The main reason for this argument is that we are proposing an EE method and not a VLM backbone; hence, we have not considered them as baselines. We have compared with OFA and Flamingo as they are compared in BLIP-2 just to show that even after becoming faster our method has not reduced the performance of BLIP-2.\", \"ques_3\": \"According to Tables 1 and 2, the performance improvement seems incremental. Instead of the speedup calculated from L323, what is the speedup on the hardware specified in the paper? Does the actual speedup align with this calculation?\", \"ans_3\": \"Note that our method pushes two metrics simultaneously, so the performance needs to be judged in both ways. 
Our goal is to reduce the trade-off between accuracy and efficiency, i.e., the model provides faster inference while having the smallest decrement in performance. Among all the existing methods, ours does this best; here we claim the novelty and observe that in terms of speedup our method is better than existing methods, and in terms of performance the drop is minimal.\\n\\nSpeedup is a standard metric used for measuring the efficiency of EE models; it has already been used in various existing works. Speedup is proportional to any kind of hardware, which helps in a fair comparison, as the actual hardware time might vary across runs. This has already been explained in previous literature.\", \"ques_4\": \"What is the speedup on hardware? Does early exit also speedup the causal self-attention (autoregressive) model?\", \"ans_4\": \"The speedup metric reported is proportional to hardware; however, we will report the actual time, but the execution might take some time. Yes, the OPT model, which is autoregressive in generating text and to which we attached exits, shows a speedup in performance.\\n\\n$W_i$ denotes the number of words that exit from the i-th exit of the decoder.\", \"ques_5\": \"In Eqn (2), the last y\\u2217 should be y\\u22171:t\\u22121?\\n\\nAns-5 Yes, that is a typo, thanks for pointing it out, we will fix this in the final version.\", \"ques_6\": \"Does other GAN-framework work? Such as WGAN?\\n\\nAns-6 We have not explored that yet, but we intuitively sense that any GAN method that can generate high-quality features with a simple generator (in our case, a single transformer layer) can perform well.\\n\\nWe hope that we clarified most of your doubts. If you have any further questions, please let us know, else please consider reassessing the scores.\"}",
"{\"summary\": \"This paper proposes a very interesting issue of early exits on existing VLM models. To achieve this goal, it introduces an adversarial training approach within a GAN-based framework. Specifically, a transformer layer is utilized to mimic the output features of the original VLM, and a classifier is utilized to determine when to exit. Experiments demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The motivation of this paper is valuable.\\n\\n2. This paper is easy to read and well-written.\\n\\n3. The proposed components are reasonable.\", \"weaknesses\": \"1. The topic is too limited. The early exits issue is a good question for the existing VLM models, especially for large VLM models. The authors just implement the early exits strategy on a single BLIP model, limiting its scalability.\\n\\n2. The illustrations of the proposed components are not clear. The authors should re-organize the method section for better presentation. An algorithmic pseudo-code of the entire process should also be provided.\\n\\n3. The experiments are insufficient. The authors compare the efficiency of their BLIPEE with other large VLMs like Flamingo, however, the authors do not apply their EE strategy to Flamingo for \\u201cplug-and-play\\u201d comparison.\\n\\n4. Since the proposed method relies on additional transformer and classifier layers, the authors should provide the comparison on model complexity.\", \"questions\": \"1. The topic is too limited. The early exits issue is a good question for the existing VLM models, especially for large VLM models. The authors just implement the early exits strategy on a single BLIP model, limiting its scalability.\\n\\n2. The illustrations of the proposed components are not clear. The authors should re-organize the method section for better presentation. An algorithmic pseudo-code of the entire process should also be provided.\\n\\n3. 
The experiments are insufficient. The authors compare the efficiency of their BLIPEE with other large VLMs like Flamingo, however, the authors do not apply their EE strategy to Flamingo for \\u201cplug-and-play\\u201d comparison.\\n\\n4. Since the proposed method relies on additional transformer and classifier layers, the authors should provide the comparison on model complexity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reminder\", \"comment\": \"Dear reviewers,\\n\\nIt is a gentle reminder to acknowledge our rebuttal and make changes accordingly.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the insightful comments.\", \"ques_1\": \"Confusing symbols in fig1, why Classifier N, what\\u2019s meaning of D1/D2?\", \"ans_1\": \"Classifier N means the final classifier, i.e., the classifier attached after the final layer of the model. D_1/D_2 are the discriminators at layers 1/2, respectively.\", \"ques_2\": \"Unsupervised manner is not novel, just self-labeling, but their pseudo labels are not accurate enough in general.\\n\\nAns-2 Yes, the pseudo labels might not be very accurate in general, but they are a very good substitute for the true labels. Also, please note that we are not claiming novelty for proposing the method of self-labeling; rather, we claim novelty in being the first to use it for an early exit model. Learning in an unsupervised manner is a major problem in EE VLMs, as most VLMs have good zero-shot capabilities and so do not require training data. But as we add exits to the initial layers, there is a requirement for training data, which restricts attaching exits to the backbone and making the process fast. To address this, we proposed BLIPEE, which can even perform comparably when there is little or no training data.\", \"ques_3\": \"Ablation study is not enough, whether or not the proposed adversarial training is necessary?\", \"ans_3\": \"The necessity of adversarial training is justified by the objective that we need to meet. We want to generate samples from a given distribution with a given architecture; this is a setup similar to a GAN, where we need to generate images from a given distribution. 
If we remove the adversarial part, the method boils down to the DeeBLIP and PABEE-BLIP methods already included in our baselines.\", \"ques_5\": \"Technical contributions are not enough, no matter of adversarial training or knowledge distillation are very commonly-used skills, no more new techniques are found.\", \"ans_5\": \"Yes, we agree that these are already existing approaches, but we claim the novelty of combining them in a unique way so that the overall method can solve the latency issues of VLMs without a higher loss in accuracy.\", \"ques_6\": \"The proposed early exit method was only tested on blip2, and other models were not tested, so the generalization ability of the method cannot be confirmed. At the same time, the section 3.1 mentioned that the problem discussed in this paper is because Q-former generates image-grounded text embeddings. However, in the case that most VLMs do not use Q-former nowadays, is this method still applicable?\", \"ans_6\": \"We chose BLIP-2 for the experiments in our method as the BLIP-2 model gives us the flexibility to use any encoder and decoder. As our method applies to the decoder in VLMs and we had to show results on multiple decoders, we have used BLIP-2. Yes, the problem might be specific to BLIP-2, as it uses the frozen decoder that might be more prone to mid-crisis.\\nNote that our method can easily be extended to any VLM and could be appended to any VLM\\u2019s decoder, as it does not have anything that is specific to the BLIP-2 model.\", \"ques_7\": \"The paper tested VQA and caption tasks, and blip2 also tested the retrieval task. How does the method in this paper perform on the retrieval task?\", \"ans_7\": \"Note that our method only applies to the decoder of the BLIP-2 model. The image-text retrieval task does not require the decoder, which in turn reduces our method to vanilla BLIP-2 inference. Hence, we have not added those results.\", \"ques_8\": \"The speedup in the article is calculated based on the number of parameters. 
Can it be calculated based on the actual inference time?\", \"ans_8\": \"Yes, it can be calculated using the actual inference time, but all other existing works report only the speedup metric, as it can be easily converted to multiple metrics such as the expected time reduction rate, the average number of layers required for a dataset, etc. So, to be fair and consistent, we used the speedup to assess the increase in speed during inference.\\n\\n\\nWe hope that we clarified most of your doubts. If you have any further questions, please let us know, else, we request you to consider reassessing the scores.\"}",
"{\"title\": \"Reminder\", \"comment\": \"Dear reviewers,\\n\\nIt is a gentle reminder to acknowledge our rebuttal and make changes accordingly.\\n\\nRegards,\\n\\nAuthors\"}",
"{\"summary\": \"An EE strategy, BLIPEE, for VLMs to effectively mitigate inference latency by reducing unnecessary computations. BLIPEE emulates the behaviour of the final layer at the exits through adversarial learning. Experimental results demonstrate the effectiveness of this approach in enhancing accuracy and model robustness by mitigating overthinking and the phenomenon of mid-crisis that the authors highlight.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The key differences in this work are clear: 1) adversarial training is employed for efficient learning of EE models. 2) The method can work under both semi-supervised and unsupervised setups by utilizing the zero-shot capabilities of BLIP-2, while previous methods require a good amount of high-quality labeled training data; this reduces the size of the required training data.\", \"weaknesses\": \"1\\u3001Confusing symbols in Fig. 1: why Classifier N, and what\\u2019s the meaning of D1/D2?\\n2\\u3001Missing some important references:\\n[1] NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks\\n[2] L. Qendro and C. Mascolo, \\\"Towards Adversarial Robustness with Early Exit Ensembles,\\\" 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, United Kingdom, 2022, pp. 313-316, doi: 10.1109/EMBC48229.2022.9871347.\\n3. The unsupervised manner is not novel, just self-labeling, and such pseudo labels are not accurate enough in general.\\n4. The ablation study is not enough: is the proposed adversarial training necessary?\\n5. Technical contributions are not enough; both adversarial training and knowledge distillation are very commonly-used techniques, and no new techniques are introduced.\", \"questions\": \"1. The proposed early exit method was only tested on BLIP-2, and other models were not tested, so the generalization ability of the method cannot be confirmed. 
At the same time, section 3.1 mentioned that the problem discussed in this paper arises because the Q-Former generates image-grounded text embeddings. However, given that most VLMs do not use a Q-Former nowadays, is this method still applicable?\\n2. The paper tested VQA and captioning tasks, and BLIP-2 also tested the retrieval task. How does the method in this paper perform on the retrieval task?\\n3. The speedup in the article is calculated based on the number of parameters. Can it be calculated based on the actual inference time?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"I am concerned that the presented method can only handle a small number of cases. I lower my rating from 6 to 5.\", \"comment\": \"About \\\"Our objective is to fasten the existing BLIP-2 VLM model\\\" in Ans-1, I think the objective is not enough for ICLR, since the work is only used to speed up one specific model. At first, I thought this paper reconstructed the BLIP-2 VLM model. Besides, I agree with Reviewer g7un: \\\"the novelty for combining them in a unique way\\\" is not enough for ICLR.\\n\\nAbout Ans-2, as shown in the tables, the speed increase is not amazing, which means that the early exit mechanism is not suitable for most cases. \\n\\nThus, I am concerned that the presented method can only handle a small number of cases. I lower my rating from 6 to 5.\"}",
"{\"title\": \"Reduction time\", \"comment\": \"Thanks for the reply.\\n\\n| | True time | Expected time reduction | Speedup |\\n| ---------- | --------- | ----------------------- | ------- |\\n| Final exit | 660.2 | 1 | 1 |\\n| Ours | 297.8 | \\\\-42.8\\\\% | 1.75x |\\n\\nThis table shows the true time, expected time reduction, and speedup; please note that they are close to each other. Here the dataset is the COCO dataset.\\n\\nWe are happy to take further questions; if not, please consider reassessing the scores.\"}"
]
} |
|
ERBm5WK8nq | LeMoLE: LLM-enhanced Mixture of Linear Experts for Time Series Forecasting | [
"Lingzheng Zhang",
"Yimin Zheng",
"Lifeng Shen",
"Shiyuan Piao",
"Ziyue Li",
"Fugee Tsung"
] | Recent research has shown that large language models (LLMs) can be effectively used for real-world time series forecasting due to their strong natural language understanding capabilities. However, aligning time series into semantic spaces of LLMs comes with high computational costs and inference complexity, particularly for long-range time series generation. Building on recent advancements in using linear models for time series, this paper introduces an LLM-enhanced mixture of linear experts for precise and efficient time series forecasting. This approach involves developing a mixture of linear experts with multiple lookback lengths and a new multimodal fusion mechanism. The use of a mixture of linear experts is efficient due to its simplicity, while the multimodal fusion mechanism adaptively combines multiple linear experts based on the learned features of the text modality from pre-trained large language models. In experiments, we rethink the need to align time series to LLMs by existing time-series large language models and further discuss their efficiency and effectiveness in time series forecasting. Our experimental results show that the proposed LeMoLE model presents lower prediction errors and higher computational efficiency than existing LLM models. | [
"time series"
] | https://openreview.net/pdf?id=ERBm5WK8nq | https://openreview.net/forum?id=ERBm5WK8nq | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yfJ20wfM6O",
"qdC4OqOx7B",
"qRCmK8SfWA",
"eAYkKcL9X2",
"DyrOCbUFVa"
],
"note_type": [
"official_review",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1730518596529,
1732621559037,
1730692157552,
1729753860545,
1730600924911
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8663/Reviewer_d7sQ"
],
[
"ICLR.cc/2025/Conference/Submission8663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8663/Reviewer_vc42"
],
[
"ICLR.cc/2025/Conference/Submission8663/Reviewer_rnDU"
],
[
"ICLR.cc/2025/Conference/Submission8663/Reviewer_hK8o"
]
],
"structured_content_str": [
"{\"summary\": \"The authors present a mixture-of-experts linear model guided by a large language model (LLM). Their experiments demonstrate that LeMoLE achieves superior performance with a reduced computational footprint compared to existing LLM-based approaches.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed methodology is clear and easy to comprehend.\", \"The ablation study focusing on the frequency domain and the number of experts offers valuable insights.\"], \"weaknesses\": [\"The novelty of the approach appears to be limited. The fundamental aspect of the model consists of a series of linear projections with varying input lengths. While the model retains a mixture of experts (MoE) architecture, it substitutes the aggregation component with a condition generated by an LLM.\", \"The utilization of text seems rather basic. For the dynamic aspect, it appears that only timestamps are considered, while for the static aspect, descriptions are provided for the channels.\"], \"minor\": \"In the caption of Table 5, the inference and training speed is given in (s), but in the table it is marked as (ms).\", \"questions\": [\"It appears that LeMoLE performs better on the Traffic and Electricity datasets, which are significantly larger than the ETT dataset. Does this suggest that LeMoLE requires more data for effective training?\", \"The incorporation of multimodal information is promising; however, its effectiveness remains unclear. Specifically, using the timestamp alone as the dynamic input seems to provide similar information to the \\\"time encode\\\" employed in Autoformer and Fedformer, which translates timestamps into one-hot encodings. What advantages does using timestamp text as auxiliary input offer over this method?\", \"Additionally, the static information consists of channel descriptions, which should remain consistent across the training, validation, and test sets within the same dataset. 
This input may not contribute additional information and could limit generalization, serving primarily as a channel identifier.\", \"It is noted that LeMoLE exhibits a comparable number of parameters to MoLE at H=96, yet significantly exceeds it at H=720. Given that the MoE architecture is largely unchanged and the size of the GPT-2 encoder remains constant, could the authors clarify the source of the additional parameters?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper presents an LLM-enhanced mixture of linear experts framework designed for time-series forecasting with multiple lookback periods. The model effectively incorporates multimodal information by integrating both global and local textual data during the ensemble process of various linear experts. The proposed approach demonstrates both high efficiency and strong predictive capability in standard and few-shot forecasting scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper offers a new perspective for LLM-enhanced time-series forecasting models. Unlike existing methods that aim to align LLMs directly with time-series models, the proposed method integrates multimodal information to improve the ensemble of multiple linear experts.\\n\\n2. Experiments demonstrate promising results for both standard and few-shot forecasting scenarios, meanwhile showing the model's efficiency compared with most LLM-enhanced forecasting models.\", \"weaknesses\": \"1. Why do time series descriptions and timestamps convey non-linearity? What is the benefit of embedding timestamps using LLM compared to the time embedding method in MoLE?\\n\\n2. The reported values in Table 2 differ significantly from those in prior studies. It is important to include explanations about the experimental settings if the numbers were not directly adopted from existing literature.\\n\\n3. Why are exchange and weather datasets excluded from the evaluation since they both demonstrate non-stationarity?\\n\\n4. The framework lacks an automated approach for selecting the optimal number of experts for new evaluation datasets. Which column in Table 9 corresponds to Table 1? For a new dataset, does the process require manually testing the number of experts from 1 to 5 to determine the best results?\\n\\n5. There is a typo in Line 199: \\\"Based on Equation\\\".\", \"questions\": \"1. 
What are the main differences compared with MoLE when both static and dynamic prompts are removed from LeMoLE? The results of \\\"w/o both prompts\\\" in Table 4 still look quite different than MoLE.\\n\\n2. The caption for Table 4 states that the results are for a prediction length of 336. However, the results for the ETTh dataset in Table 4 appear to align with a prediction length of 96, not 336, in Table 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes an enhanced time series prediction method based on LLMs named LeMoLE, which uses multiple lookback windows of different lengths to build a mixture of linear experts. At the same time, the outputs of the MoLE are adjusted based on the static knowledge and dynamic knowledge constructed by an LLM.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper clearly describes the proposed method.\\n2. This paper is well organized and generally well written.\", \"weaknesses\": \"1. This paper is not innovative enough, and the proposed method does not have many highlights compared to existing works. First of all, the mechanism based on multiple lookback windows of different lengths is not reasonable. In essence, modeling the learnable matrix $W_m$ for multiple different lookback window lengths is equivalent to learning the original linear layer parameter $W$. It can be considered that, for a specific $W_m$, what it learns is the matrix $W_m^{'} = [Zeros_m, W_m]$, and $W$ can be obtained by directly combining these $W_m^{'}$ matrices. Secondly, the conditioning module is a combination of existing work. The static prompt comes from Time-LLM (ICLR 2024), and the dynamic prompt simply replaces the timestamp embedding in MoLE with the LLM-based output embedding, so there is nothing new in this module.\\n2. There are some problems with the writing, and Equation 3 is referenced incorrectly.\\n3. There are some doubts about the experimental results. First, there are few types of test datasets. Secondly, taking the Electricity dataset as an example, the test results of this article are about twice the self-reported results of PatchTST (ICLR 2023), the test results on the Traffic dataset are about half of them, and the test results on both the ETTh1 and ETTm1 datasets are not in the same order of magnitude as the results of several existing works. It is not clear why this is the case.\\n4. 
The frequency-domain mixture of experts introduced in this paper is meaningless: the paper only explains why its effect is not good. If the time-domain and frequency-domain mixtures of experts each had their own suitable scenarios, and the paper analyzed them, it would make more sense; but based on the current description, I think this introduction is pointless.\\n5. In the ablation experiment on the number of experts, each dataset has a different sensitivity to this factor, which supports my point in Q1. At the same time, for the ETTh1 dataset, as the number of experts increases, the effect becomes better. Why not continue to set more experts and explore the performance limits on this dataset?\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"Linear models have proven effective in time series forecasting due to their capacity to capture and leverage the linear relationships inherent in many time series datasets. The challenge is to develop a powerful prediction model that retains the high efficiency of linear models. The authors develop an LLM-enhanced Mixture-of-Linear-Experts (MoLE) for time series forecasting. The authors mention that this is the first work on improving linear time series models based on mixture-of-expert learning and multimodal learning. Compared to several recent state-of-the-art prediction networks on long-term forecasting and few-shot tasks, this method demonstrates the effectiveness of the proposed LeMoLE in terms of accuracy and efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. By integrating LLMs with linear experts, especially multiple linear experts, on time series forecasting tasks, LeMoLE achieves a new SOTA compared to existing LLM-based time series forecasting models;\\n2. This study is the first work that uses linear time series models based on mixture-of-expert learning and multimodal learning.\", \"weaknesses\": \"This article looks incomplete.\\n\\n1. The novelty is not strong enough. This architecture roughly assembles LLMs into a linear layer, and there is no clear presentation of the CNN structure used. It would be better to theoretically discuss how and why a CNN can outperform other architectures and to provide evidence on how a lightweight CNN can fuse the linear features.\\n2. Some typos and errors should be revised. For example, in Eq. 3, $b_{i}$ should be $b_{m}$. \\\"Based on Equation (??)\\\" should be revised. \\n3. The presentation sounds ambiguous in some sections. For example, LeMoLE mentioned the use of varying window lengths, but in this article, there is no detail on the CNN when setting the range of $T$. 
Especially, when $T=96$ and the number of experts is $5$, should the convolutional block be $3 \\\\times 3$ or $7 \\\\times 7$? Or other settings?\\n4. A lack of visualization results. For example, comparisons between time-based and frequency-based LeMoLE outputs, or visualizations of how different window lengths affect the results.\\n5. Finally, the results are not consistent with the experiment settings. For example, the length of $T$ is not consistent with the $H$ of Table 2 and Table 3. Please double check if this is a typo.\", \"questions\": \"1. LeMoLE can process global and local text data; can you please show some examples of how well LeMoLE can integrate the prompt text with time series data?\\n2. What is $H$ in Eq. 1?\\n3. What is the architecture of the CNN in Figure 2?\\n4. What are the limitations of this work?\\n5. What is the difference between LeMoLE-F and LeMoLE-T in processing the input features?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EQz0C5PSyR | Embedding Learning for Approximating Person-specific Cognitive Similarity | [
"Yujin Cha"
] | Metric learning is often applied in scenarios where labels are well-defined or where there is a ground truth for semantic similarity between data points. However, in expert domains such as medical data, where experts perceive features and similarities differently on an individual basis, modeling psychological embeddings at the individual level can be beneficial. Such embeddings can predict factors that influence behavior, such as individual uncertainty, and support personalized learning strategies. Despite this potential, the amount of person-specific behavioral data that can be collected through similarity behavior sampling is insufficient in most scenarios, making modeling individual cognitive embeddings challenging and underexplored. In this study, we proposed integrating supervised learning on small-scale similarity sampling data with unsupervised autoencoder-based manifold learning to approximate person-specific psychological embeddings with significantly improved similarity inference performance. We conducted a large-scale experiment with 121 clinical physicians, measured their cognitive similarities using medical image data, and implemented person-specific models. Our results demonstrate that even in complex expert domains, such as medical imaging, where cognitive similarity varies between individuals, person-specific psychological embeddings can be effectively approximated using limited behavioral data. | [
"Psychological embedding",
"Metric learning",
"Similarity",
"Cognitive representation",
"Autoencoder",
"Medical image"
] | Reject | https://openreview.net/pdf?id=EQz0C5PSyR | https://openreview.net/forum?id=EQz0C5PSyR | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xaLwTbL5oI",
"xG3ygWW6xq",
"umLlKEgfIH",
"u4NsnBef6Y",
"tWWmuwNPYG",
"sNCbFlMp0i",
"nipCXAS4cI",
"mdVN97BVxy",
"i0K7xpsAg6",
"hQUHvMEcZz",
"gR9byjbym8",
"fF8vv2YBZa",
"aXjsKWX8XV",
"WPaAwx8XyB",
"T7Hhv4KrSH",
"T6TOwFQw3z",
"Q97c8k8WLf",
"PGuqRTcxAx",
"P0rEKsrsvD",
"K9U4Sroyvu",
"GVYI7cCRcj",
"BiKWNSWMpX",
"B3wTbLXeNn",
"1KShIrWpso",
"0pABE1i7zt",
"0PoXwnqE04"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730700783166,
1731914107577,
1730719729902,
1731916393876,
1731915436544,
1737523778934,
1732628188189,
1729848332417,
1730881870829,
1732514475602,
1731919943469,
1731918352719,
1732550436536,
1731918200321,
1731914983832,
1730714263209,
1732549600665,
1734565698266,
1731916036397,
1732533310797,
1732615164003,
1733302024969,
1733223657189,
1731919680231,
1732513996211,
1732866143660
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_yX4C"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_k73t"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_tueP"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_xoWe"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_gtAW"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Area_Chair_ESfG"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_k73t"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_xoWe"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_yX4C"
],
[
"ICLR.cc/2025/Conference/Submission6599/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_yX4C"
],
[
"ICLR.cc/2025/Conference/Submission6599/Reviewer_gtAW"
]
],
"structured_content_str": [
"{\"summary\": \"This paper presents an investigation of using a small amount of similarity sampling data to fine-tune pretrained embeddings and learn person-specific embeddings. The authors evaluate this on a medical imaging task and perform an impressively large-scale evaluation with 121 clinical physicians. Person-specific/personalization analyses are interesting and valuable, as there are many tasks in which there are individual differences and performance can be improved significantly by adapting to a user.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper has several strengths; it is well written and clearly presented overall. The data collection is impressive and valuable to the research community. The paper generally has sufficient details for replication.\", \"weaknesses\": \"The authors claim theirs is the \\u201cfirst large-scale experimental study to model individual-level psychological embeddings\\u201d. This is a big claim and I don\\u2019t think it is really justified. There are many papers on personalization and subjective tasks such as emotion labeling in affective computing. I would request the authors to clarify what the novelty is, or whether I have misunderstood the approach.\\nRelated to this, I think the paper should have a more extensive related work section on personalization that helps the reader understand the differences between this work and other adaptive/personalized models. \\n\\nThe details of the data collection are a little unclear. 1) How many images did each clinician label? Did they all complete 500 exactly? What was the range of times it took for them to complete these? 
This might seem a little like I am nitpicking, but the authors claim the data collection as one of their three main contributions, and so it would be great to have a little more detail about the data collection overall, including (a) the background of the clinicians, (b) the average and range of number of years of experience, (c) more details about the instructions they were given. \\n\\nThe data collection is impressive and the study is interesting. I commend the authors on this. The results also support the performance claims, showing a consistent bump in accuracy. This is not particularly surprising given that personalization usually leads to better results than a generic model. \\nWill the data be released? I apologize if I missed something, but again, the value of these data for future research could be significant. \\n\\nI did not quite understand the message behind Fig. 4(b): are they both highlighting that there is little correlation between the two? \\n\\nIn the introduction's numbered key contributions, the text in parentheses, e.g. \\u201cPerspectives on cognitive science\\u201d, seems unnecessary. I would remove these.\\n\\nOverall, the paper is well written and motivates the work well. I think this has potential; however, it would be helpful if the work were positioned more clearly in the literature and the contributions contextualized within that.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your valuable comments\", \"comment\": \"Thank you for your valuable comments. Your review comments have greatly contributed to the improvement of the paper, and we will incorporate the feedback into the revised version, which will be uploaded in the next few days.\\n\\n\\n1. Organization of the Paper and Figure 2 \\n\\nWe apologize for the shortcomings in the organization of the main text and the presentation of Figure 2. Regarding Figure 2, the intention was not to depict different modules but rather to illustrate information received at different time points (previous epoch) from the same module. We will make this clarification in the revised version.\\n\\nAdditionally, we recognize that certain parts of the paper\\u2019s organization are awkward, such as content that should belong in the Methods section being included in the Experiments section. These issues will be addressed in the revised version, and we kindly ask you to refer to the updated manuscript for these improvements.\\n\\n\\n2. Variable triplet loss\\n\\nThank you for your comments. Please refer to the explanation we have provided below. We will incorporate the key points from the explanation into the main text of the paper and include the detailed aspects in the appendix. \\nOur Variable Triplet Loss is not an innovative development but rather a practical modification of the traditional triplet loss proposed in previous studies. In conventional machine learning, metric learning typically learns a general metric that averages similarity judgments across many data points. In contrast, for high-complexity data (e.g., medical images), individual similarity patterns can vary significantly, meaning that metrics should be sampled on an individual basis. However, it is difficult to sample many data points from a single individual, and there is high uncertainty in the information obtained from these samples. 
To address this issue of individual similarity modeling, we modified the traditional loss function to enhance its practical utility. Below, we compare the traditional triplet loss with our variable triplet loss:\", \"traditional_triplet_loss\": \"$L(A, P, N) = \\\\max \\\\left( \\\\left| f(A) - f(P) \\\\right|^2 - \\\\left| f(A) - f(N) \\\\right|^2 + \\\\text{margin}, 0 \\\\right)$\", \"our_triplet_loss\": \"$L(A, P, N) = \\\\max \\\\left( \\\\alpha \\\\left| f(A) - \\\\hat{f(P)} \\\\right|^2 - \\\\beta \\\\left| f(A) - \\\\hat{f(N)} \\\\right|^2, 0 \\\\right)$\\n\\nFor comparison, please note that the terms in our Equation (1) in the paper correspond to the traditional loss function as follows:\\n\\nC (closed) \\u2192 P (positive), D (distant) \\u2192 N (negative), A (anchor) \\u2192 same, E() \\u2192 f().\", \"our_loss_function_has_two_main_advantages_for_learning_individual_embeddings\": \"(1) Absence of margin term:\\n\\nIn the traditional triplet loss, there is an explicit margin term that forces the representation distance to positive samples to be smaller than the distance to negative samples. Our loss function, however, does not use a margin but instead uses weighting terms \\ud835\\udefc and \\ud835\\udefd to encourage the representations of positive samples to be closer to the anchor than those of negative samples. Unlike the traditional loss, though, we do not enforce the distance to positive samples to be strictly smaller than that to negative samples. This is significant because, unlike typical metric learning, where average trends are learned, individual similarity data for expert datasets is sparse and could reflect errors or outliers. For instance, when sampling similarity from 100 sets, 5-10 of these samples may not align with the individual's general tendency due to human error. 
In such cases, it is important not to force the model to learn outliers as part of the general tendency through a margin, but instead to ensure that data points that deviate from the overall trend are not included in the embedding model.\\n\\n\\n(2) Constant embedding for positive and negative samples during training:\\n\\nIn our triplet loss, the values of $\\\\hat{f(P)}$ and $\\\\hat{f(N)}$ are treated as constants calculated from the model of the previous epoch. In contrast, the traditional triplet loss treats $f(P)$ and $f(N)$ as variables to be learned. Our setup is based on the observation that when the triplet loss is combined with the reconstruction loss of an autoencoder, it is empirically more stable to treat $\\\\hat{f(P)}$ and $\\\\hat{f(N)}$ as constants and only learn $f(A)$.\\n\\n\\n3. Sensitivity analysis\\n\\nThank you for your excellent comment. We are currently performing a sensitivity analysis and will include the results in the appendix of the revised version.\\n\\n\\n4. Statistical validation of similarity pattern diversity\\n\\nWe emphasize the diversity of similarity patterns among subjects and will conduct a statistical test to validate the randomness of this distribution. The results will be included in the revised version.\\n\\n\\nWe will inform you of the incorporated changes once the revised version is uploaded.\"}",
"{\"summary\": \"This paper addresses the challenge of modeling individual cognitive embeddings in expert domains, like medical data, where perceptions of features and similarities vary significantly among individuals. It proposes a novel approach that combines supervised learning on limited similarity sampling data with unsupervised autoencoder-based manifold learning to enhance the accuracy of person-specific psychological embeddings. The results from a large-scale study involving clinical physicians show that even with limited behavioral data, the proposed method effectively approximates these embeddings and improves similarity inference performance in complex domains.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method intriguingly combines supervised learning on small-scale similarity sampling data with unsupervised autoencoder-based manifold learning to approximate person-specific psychological embeddings.\\n\\n2. The authors conducted a comprehensive experiment involving 121 clinical physicians, measuring their cognitive similarities using medical image data, which lends credibility to the results.\", \"weaknesses\": \"1. The paper's organization is unclear, making it difficult to follow. For instance, Figure 2 lacks sufficient detail and explanation regarding how the various modules interact with one another.\\n 2. Figure 1 and Equation 1, which illustrate the person-specific cognitive embedding modeling framework and the variable triplet loss function, lack detailed explanation. The authors should clarify how the proposed variable triplet loss function (Equation 1) innovatively captures individual cognitive similarity compared to standard triplet loss functions. A comparative analysis with traditional triplet loss in terms of mathematical formulation and expected outcomes would be beneficial.\\n3. 
The authors should conduct a sensitivity analysis on $\\\\alpha$ and $\\\\beta$ to demonstrate the robustness of the model concerning these critical parameters.\\n4. Figure 3 and Section 4.3 present the group-based similarity pattern analysis results. While the t-SNE visualizations illustrate the variability in similarity patterns among subjects, the paper lacks a statistical test to quantify the significance of these differences. The authors should include statistical validation, such as ANOVA or post-hoc tests, to confirm the variability of cognitive similarity patterns across different subjects. Additionally, the paper should discuss how these findings may generalize beyond the specific group of clinical physicians studied.\", \"questions\": \"The authors are required to address all my concerns carefully listed in the Weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"We deeply appreciate your constructive review comments (Response 2)\", \"comment\": \"3. Applicability to Other Modalities such as CT and MRI (Question 2)\\n\\nIn our experiment, we did not introduce any special settings that are unique to Chest X-ray (CXR). For example, as we mentioned earlier, clinical information that doctors typically consider when interpreting CXR was not provided. Therefore, we believe that this approach can be applied to other modalities, such as MRI or CT, and could also be generally applicable to other non-medical image datasets. However, CT and MRI differ from X-ray images in that they have many more dimensions and are in voxel-based 3D data form. When interpreting CT or MRI scans, doctors do not consider the entire 2D tensor as with CXR, but instead focus on specific 3D lesions. Therefore, applying our framework to CT or MRI would be more practical if we consider the similarity of lesion-centered partial images rather than the entire image. Additionally, the quality and emphasis of the image can vary significantly depending on the image synthesis parameters, which can introduce strong biases in determining similarity. Therefore, controlling these parameters would need to be considered. Furthermore, unlike Chest X-rays, CT and MRI scans are often handled by physicians with different specialties, and some may not handle them at all. Thus, it would be important to control for subjects' specific specialties in experimental validation.\\n\\n\\n4. Decision to use CNN instead of Transformer and the design of our network architecture (Weakness 2 & Question 3).\\n\\nThank you for the important question. We would like to emphasize two aspects regarding our decision to use CNN instead of Transformer.\\n\\n(1) First, unlike traditional metric learning, the amount of similarity information available for individual metric learning is very limited. 
In our experiment, each participant was sampled 500 times, and even with this small amount of data, it took more than 5 hours to process. To fit such small-scale sampling data, a relatively lightweight model architecture would be more suitable. Our goal was not to find the optimal architecture but to demonstrate the effect of combining unsupervised and supervised learning for individual metric learning. Therefore, we performed the analysis using the simplest architecture suitable for small datasets. In our future work, we plan to conduct performance comparison studies using various foundational models, such as transformers, to identify the optimal architecture.\\n\\n(2) Second, there is significant evidence that CNN representations overlap highly with human visual perceptual characteristics. (Jha, A., Peterson, J. C., & Griffiths, T. L. (2023). Extracting low\\u2010dimensional psychological representations from convolutional neural networks. Cognitive science, 47(1), e13226; Lindsay, Grace W. \\\"Convolutional neural networks as a model of the visual system: Past, present, and future.\\\" Journal of cognitive neuroscience 33.10 (2021): 2017-2031.) There is also evidence that CNNs can model human visual information processing at an abstract level (Kubilius, Jonas, et al. \\\"Brain-like object recognition with high-performing shallow recurrent ANNs.\\\" Advances in neural information processing systems 32 (2019)). While transformers are known to be effective in capturing global context in image processing tasks, there was insufficient evidence to suggest that they could capture human-like cognitive features effectively.\\n\\n\\nAfter deciding to use CNN as our base model, we gave considerable thought to the detailed architecture. We were inspired by the CORnet model (Kubilius, Jonas, et al. 
\\\"Brain-like object recognition with high-performing shallow recurrent ANNs.\\\" Advances in neural information processing systems 32 (2019)), which mimics the human visual information processing system. So we designed an encoder-decoder structure consisting of four layers. The specific parameters for each layer were determined empirically through several preliminary experiments to find near-optimal values.\\n\\n\\n\\n5. Lack of comparison with alternative embedding models as baselines (Weakness 3)\\n\\nIn the ablation study presented in Table 1 of the paper, we conducted a comparative experiment by removing the triplet loss and reconstruction loss individually. It is noteworthy that even without additional external information (such as human behavior data or labels), the combination of triplet loss and reconstruction loss resulted in synergistic performance improvements. The unsupervised learning introduced by the reconstruction loss typically involves the model finding features on its own; however, when combined with the triplet loss, it optimizes towards discovering cognitive features that differ from person to person.\\n\\nThat said, we acknowledge the comment regarding the lack of baseline comparison models and will perform an analysis on a model that combines only the encoder with triplet loss. The results will be included in the upcoming revised version.\"}",
"{\"title\": \"We deeply appreciate the constructive questions and comments (Response 2)\", \"comment\": \"3. The utility of person-specific embeddings in the context of machine learning frameworks (Question 1)\\n\\nFirst, one area where our framework can be applied in the context of existing machine learning is in active learning scenarios. In active learning, the problem of selecting unlabeled data for querying an oracle is cost-dependent. Therefore, if possible, querying the oracle about a dataset it deems dissimilar could be a strategy to extract diverse information within the cost constraints. By applying our idea, we can identify data distributions that the oracle deems similar, thus improving the efficiency of active learning.\\n\\nSecond, in practical applications like Reinforcement Learning with Human Feedback (RLHF) for large language model (LLM) development, challenges arise when it is difficult to standardize or normalize human feedback. Our framework could be used to categorize and normalize feedback based on individual similarity patterns, providing a strategy to address this issue.\\n\\n\\n4. Scale with an increasing number of individuals and data points (Question 3)\\n\\nModeling is performed independently for each individual, so as the number of individuals increases, the computational load increases proportionally. Each model is independent, and the performance of the model is defined at the individual model level, so there is no direct correlation between the number of individuals and performance. (We implemented 121 individual models for 121 subjects, and each model showed an average prediction accuracy of 68%.)\\n\\nAn increase in the number of sampling data is expected to significantly contribute to overall performance improvement. The evidence for this is presented in Fig. 6(b). 
In simulations conducted with an amount of data equivalent to that of the human behavior experiments, we achieved prediction performance comparable to or slightly better than that of the human behavior experiments. However, as the amount of sampling data increases, the simulation performance also increases proportionally. Despite this, increasing the number of samples to model individual similarity is practically difficult, so it will be more important to improve sampling efficiency or adopt approaches to enhance model performance (such as methods to reduce uncertainty).\\n\\n\\n5.\\tHow can the insights from this study be used to develop personalized learning strategies for experts or improve human-AI collaboration? (Question 4)\\n\\nIn the field of human-AI collaboration, the most challenging aspect is handling human uncertainty. Uncertainty is subjective but can be considered a type of function, and there is considerable evidence that human uncertainty can be modeled using machine learning (Yujin Cha and Sang Wan Lee. Human uncertainty inference via deterministic ensemble neural networks. In 35th AAAI Conference on Artificial Intelligence/33rd Conference on Innovative Applications of Artificial Intelligence/11th Symposium on Educational Advances in Artificial Intelligence, pp. 5877\\u20135886. ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, 2021.) \\n\\nIn other words, it is theoretically possible to model uncertainty by sampling it from individuals for specific data. However, the practical challenge in uncertainty modeling arises from the difficulty in approximating an individual\\u2019s abstracted representation space. For example, if a specific individual perceives certain data as similar, it is likely that their uncertainty about those data points will also be similar. This allows us to approximate uncertainty about a wide range of abstract data representations defined in the individual\\u2019s cognitive space using limited uncertainty sampling information. 
In doing so, we could assist in precision learning for humans using AI and apply it to classify cases where AI should query the human or delegate judgment.\\n\\nMoreover, if we could approximate an expert's cognitive embedding, we could use the inferred pattern to recommend specialists who can handle specific situations well. By quantifying the degree of similarity in perceptions of specific diseases, we could identify a highly specialized physician who knows a particular condition thoroughly.\\n\\nFrom a machine learning perspective, if there is a highly capable human expert, we might gain inspiration for implementing high-performance machine learning models by reconstructing the expert's embedding.\\n\\nIf you have any additional questions or unresolved concerns, please let us know, and we are ready to provide detailed answers!\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you for your comments.\\nWe hope the following response addresses your concerns.\\n\\nFirst, we also recognize the importance of exploring the potential generalization of our framework across diverse domains. As this study is grounded in human behavior experiments, external validation using benchmark datasets requires additional time and resources. Given the challenges of promptly recruiting participants and conducting experiments, we plan to address this aspect in future research.\\nWhile we fully appreciate the reviewer\\u2019s perspective, we respectfully request your understanding that, in machine learning research involving human behavior experiments, studies utilizing general benchmark datasets and those leveraging specialized datasets each hold unique significance. In this context, we believe that studies like ours, which employ only specialized data, can also contribute meaningfully to the fields of machine learning and cognitive science.\\nIt is worth noting that human behavior experiment research involves substantial costs and time, beyond those required for the modeling process itself. Consequently, conducting experimental research that simultaneously uses both benchmark and specialized datasets is highly challenging and, to the best of our knowledge, unprecedented. We hope that recognizing the independent contributions of studies conducted with either general or specialized datasets will support and encourage the development of high-risk (and resource-intensive) human behavior experiment-based studies in the future.\\n\\n\\nSecond, in traditional machine learning frameworks, active learning assumes a perfect oracle, with oracle annotations considered error-free. However, in real-world active learning scenarios, the oracle is noisy; that is, human experts acting as oracles may have uncertainty, leading to potential annotation errors. 
Traditional active learning considers uncertainty solely from the model\\u2019s perspective, but to optimize model performance under limited budgets, it is advantageous to query data points with high uncertainty from the model\\u2019s perspective but low uncertainty from the oracle\\u2019s perspective. Unlike the model, it is difficult to measure or estimate the oracle\\u2019s uncertainty for all candidate data points. Therefore, a practical approach is to measure the oracle\\u2019s uncertainty for a subset of samples and then model this uncertainty. Just as the model\\u2019s uncertainty is a function in the representation space (embedding) rather than the original high-dimensional space, modeling human oracle uncertainty requires understanding the psychological embedding that represents the human oracle\\u2019s similarity. In this context, similar data points in the psychological embedding space share similar uncertainty. Since individuals exhibit different similarity patterns, their psychological representation spaces\\u2014and consequently their uncertainty models\\u2014also differ.\\n\\nIn summary, in active learning, it is advantageous for the model to query data points with low oracle uncertainty. Since measuring oracle uncertainty for all data points is not feasible, modeling is necessary. This, in turn, requires oracle uncertainty measurements for sample data and a psychological embedding model. Therefore, reconstructing a psychological embedding space that reflects person-specific similarity patterns can enhance the accuracy and efficiency of human-in-the-loop processes such as active learning.\\n\\nPlease let us know any additional comments or questions. \\nThank you.\"}",
"{\"summary\": \"The paper presents an approach for learning a human-like representational space using a deep learning architecture. This embedding space is supervised by human similarity judgments, specifically, for triplets of chest X-ray images. The deep learning architecture is an autoencoder, and the representation learning is implemented using a triplet loss function, whose purpose is to produce an embedding space where objects (images) perceived as more similar by humans are positioned closer together. The model successfully generalizes to predict human similarity judgments.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The main contribution of this paper is its demonstration that it is possible to learn person-specific embeddings, as seen in good out-of-sample prediction of similarity relations at the individual level. Additionally, the findings suggest that person-specific embeddings effectively capture unique, idiosyncratic aspects of each person's similarity judgments. This is seen in the fact that models learned for one participant do not perform well in predicting similarity judgments of other participants.\", \"weaknesses\": \"1) Novelty of embedding workflow: The novelty of the paper is somewhat limited, because several studies have already presented related methods for learning embedding spaces that align object distances with human similarity judgments, or related tasks like the triplet task (where a person selects which of two objects is more similar to a reference object). Palazzo et al. (2020) use a triplet loss to learn a joint embedding of EEG data and visual images; Zhang et al. (2018) present LPIPS which reweights layers of a pre-existing network based on human triplet judgments; Tarigopula et al. (2023) use pruning to improve the alignment between object-distances in the embedding space and human similarity judgments; Jha et al. 
(2023) extract low-dimensional representations from pre-trained CNNs using similarity learning in a lower-rank embedding space so that distances in the embedding space maintain a monotonic relation with human similarity judgments. Given these prior studies, the main differences in the present study are the use of an autoencoder and learning/modeling single-participant data. However, even these differences are associated with some weaknesses as detailed below.\\n\\n2) Some architecture and implementation details are not well explained or justified. In particular, it is not clear why the authors choose to use an autoencoder architecture rather than a simpler encoder operating on features produced from a separate feature extractor. In principle, it should be possible to learn an embedding space without the decoder (see Jha et al, Palazzo et al.). That would produce a smaller, less complex model. Looking at the details of the ablation study it is clear that the addition of the decoder improves performance, as compared to using the triplet loss alone, and the reason is probably that the decoder is necessary for learning a rich set of visual features. An alternative (used by prior studies) is to use a pretrained CNN as a feature extractor, and then rely on an encoder alone for metric learning. A potential weakness in using the autoencoder is that the object-distances end up relying on features that are also required to reconstruct the image, but it is not clear whether those are necessarily the most important drivers of human similarity judgments. Another reason for using a pre-trained network as a feature extractor is suggested by the ablation exercise presented in Table 1. It shows that using a reconstruction (decoding) loss alone produces above-chance performance in predicting human judgments. This suggests that certain image features, captured by the decoder, drive similarity judgments across the group. 
This strengthens the argument for using a strong pretrained architecture as feature extractor, which could then be fine-tuned on subject-specific data.\\n\\n3) Absence of justification for modeling individual data: The abstract and introduction write that \\u201cmodeling psychological embeddings at the individual level can be beneficial,\\u201d but the authors do not provide a clear demonstration of how single-participant modeling improves a specific objective or task. For certain applications, such as identifying individuals who may differ from others, it is not even necessary to use an embedding space; such analyses can be conducted directly from the similarity matrices computed from the behavioral data.\\n\\n4) Novelty and strength of interpretability analysis (section 4.6): the authors introduce an interpretability analysis whose purpose is to identify important parts of the images. It has a few weaknesses. First, the details are not presented in a separate methods section but introduced on the fly in the results. Second, the analysis choices are not argued for. The indicator of quality to be explained is the variance in a single reference unit of the network, which presents the strongest variance for a batch of images. They then define more important image pixels as those whose masking reduces the variance in this unit. Both Palazzo et al. and Tarigopula et al. report related masking procedures, but in those studies, the impact of masking was evaluated by determining how the masking of each pixel (or image region) impacts the alignment between the DNN and human similarity spaces, which is a more direct test of which image areas are psychologically relevant than the test evaluated here. As a consequence, the novelty and validity of the masking procedure suggested here is weakened. 
\\nSeparately from this issue, a formal quantitative overlap between human raters and the interpretability measure is missing; only a qualitative evaluation is provided via a figure.\", \"questions\": \"it was not clear to me why the loss term called \\u2018variable triplet loss\\u2019 was used. The traditional triplet loss forces a solution where D(a,c) < [D(a,d)+margin] where D is distance, a the anchor and c,d the similar and dissimilar objects. To my understanding, the loss term used (stronger weight on distance to closer anchor) will encourage D(a,c) < D(a,d) but does not force it. That is, there could be solutions where this does not hold. The choice of this term should be better motivated.\\n\\nAdditional feedback and references mentioned\", \"re\": \"Figure 3 \\u2013 The figure shows how participants are positioned in a lower-dimensional space. To interpret these distances, it would be good to include a test-retest measure for each participant, which would formally quantify intra-participant variability, not just inter-participant variability.\\np. 1 Re\\u2019 the statement that \\u201cthe amount of person-specific behavioral data that can be collected through similarity behavior sampling is insufficient in most scenarios\\u201d, and similar statements in Section 2.1: There are effective multi-item arrangement methods, similar to the procedure used here that allow estimating object similarity in multiple dimensions (Kriegeskorte and Mur, 2012). \\np.2 Re\\u2019 the statement \\u201cwe conducted a first-ever behavioral sampling experiment to measure the cognitive similarity of actual CXR images with 121 clinical physicians, focusing on realistic scenarios.\\u201d This seems to be an important point, but it was not clear what does the similarity of medical images measure? 
If these judgments are independent of diagnosis (as appears to be the case here), the dimensions that drive similarity might be completely unconstrained and left to each person\\u2019s own interpretation. This means that it\\u2019s possible that two physicians could make very different similarity judgments even if they arrive at the same diagnosis. It would be interesting to know whether these similarity judgments correlate with agreement on diagnosis.\\nP. 4 The authors mention a limitation of person-specific modeling, writing, \\u201cbehavioral data from a subject can typically only be used to train an individual model for that subject.\\u201d The word \\u2018only\\u2019 was unclear; the judgments could be averaged to create a group-level similarity matrix if the triplets are the same across participants.\\np. 5 It\\u2019s not completely clear how the binary labeling was applied to a triplet so that concatenation produced SPV.\\np. 6 The participants were 121 clinical physicians. They seem to vary widely over age/experience (Appendix; Table 2 Min = 26; Max = 55 years of age). It\\u2019s probable they differ considerably in their ability to evaluate chest X-ray images. It could be interesting to see whether the embeddings or behavior are more similar among the more experienced participants.\\np. 7 The method for applying t-SNE to binary strings of SPV is unclear. Binary data require specific distance functions, and those details are missing here.\\np. 7 section 4.5 clunky writing around the text in parentheses. \\n\\nRefs\\nPalazzo, S., Spampinato, C., Kavasidis, I., Giordano, D., Schmidt, J., & Shah, M. (2020). Decoding brain representations by multimodal learning of neural activity and visual features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11), 3833-3849.\\nZhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. 
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 586-595).\\nJha, A., Peterson, J. C., & Griffiths, T. L. (2023). Extracting low\\u2010dimensional psychological representations from convolutional neural networks. Cognitive science, 47(1), e13226.\\nTarigopula, P., Fairhall, S. L., Bavaresco, A., Truong, N., & Hasson, U. (2023). Improved prediction of behavioral and neural similarity spaces using pruned DNNs. Neural Networks, 168, 89-104.\\nKriegeskorte, N., & Mur, M. (2012). Inverse MDS: Inferring dissimilarity structure from multiple item arrangements. Frontiers in psychology, 3, 245.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper explores approximating person-specific cognitive embeddings in expert domains, where similarity perceptions vary between individuals. The authors combine supervised learning on limited similarity data with unsupervised autoencoder-based manifold learning. An experiment with clinical physicians and medical images demonstrates the feasibility of this approach. The paper contributes a new method for modeling individual-level psychological embeddings, showing the potential of autoencoders in this context, and validating the use of variable triplet loss.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles a unique problem: approximating person-specific cognitive embeddings, particularly in domains with high inter-observer variability like medical image interpretation. This approach is a novel application of metric learning.\", \"The study includes an experiment with 121 clinical physicians, using real medical image data. This provides empirical support to their claims.\", \"The paper clearly outlines the methodology, including the triangular measurement framework for collecting behavioral data and the integration of supervised and unsupervised learning for embedding modeling.\"], \"weaknesses\": [\"The core focus of the study leans more towards cognitive science and human-computer interaction, with limited novelty in terms of machine learning techniques. The use of standard autoencoder architectures and the absence of new metric learning algorithms may lessen its impact on the machine learning community.\", \"The study primarily focuses on medical image interpretation with limited exploration of the generalizability of the proposed approach to other domains. 
Further experiments on diverse datasets and tasks would strengthen the paper's contribution.\", \"The paper lacks a thorough analysis of the proposed method, especially concerning the convergence properties of the loss function and the interaction between triplet loss and manifold learning in autoencoders.\"], \"questions\": [\"Could the problem of approximating person-specific embeddings be framed in the context of existing machine learning challenges like label noise or annotator disagreement? This could help position the work within a more familiar framework for the ML audience.\", \"Have you considered evaluating the approach on publicly available benchmark datasets for label noise or annotator disagreement? This would provide a point of comparison with existing methods and offer insights into the generalizability of your findings.\", \"How does the proposed approach scale with an increasing number of individuals and data points? Are there any considerations for improving the computational efficiency of the method?\", \"How can the insights from this study be used to develop personalized learning strategies for experts or improve human-AI collaboration in domains with high inter-observer variability?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for your understanding\", \"comment\": \"We are currently conducting additional analyses, which has delayed the upload of the revised version of the paper. We will notify you as soon as it is uploaded. We deeply appreciate your patience and consideration.\"}",
"{\"title\": \"We deeply appreciate the thorough review comments (Response 2)\", \"comment\": \"5. Triplet Loss\\n\\nAs the reviewer pointed out, traditional triplet loss enforces a margin with the condition D(a,c) < [D(a,d) + margin]. However, our triplet loss does not explicitly consider a margin. We would like to clarify that this choice is intentional. Unlike traditional metric learning, which learns an average metric for the general population, the person-specific similarity sampling data for expert data is limited and can reflect considerable uncertainty. For instance, when sampling similarity from 100 sets, 5-10 of those samples may deviate from the true similarity trend of the subject. Instead of forcing the learning of similarity from noisy data that arises from human behavior sampling via a margin, our goal is to prevent the similarity of data that deviates from the trend from being incorporated into the model.\\n\\n\\n6. Figure 3 (test-retest analysis)\\n\\nBehavior sampling for each subject was conducted 500 times, divided across intervals of more than one day. While it is challenging to perform a rigorous test-retest analysis under the given constraints, we will reanalyze the divided datasets collected at different time intervals. The observation of similar trends in the two behavior datasets sampled at different time intervals will be incorporated into the appendix materials of the revised version.\\n\\n\\n7. Relationship between diagnosis and similarity\\n\\nWe agree with the reviewer\\u2019s opinion and find this to be a very intriguing topic. In fact, we conducted separate medical imaging diagnostic tests for all subjects apart from the similarity measurement experiments. However, we refrained from analyzing the diagnostic results, as we were concerned that it might dilute the focus of this study on embedding modeling. 
Nevertheless, we have accepted the reviewer\\u2019s suggestion and performed a brief analysis of the relationship between diagnostic ability and similarity perception patterns. This analysis will be incorporated into the revised edition, which will be uploaded in a few days.\", \"to_summarize_the_results\": \"- In the CXR-A group, the similarity pattern vectors of physicians with superior diagnostic abilities tended to cluster closer together, but no distinct clusters were formed.\\n- In the CXR-B group, however, the similarity pattern vectors of highly skilled diagnosticians showed a tendency to form distinct clusters.\\nThis supports our hypothesis that CXR-B involves a higher tendency for similarity perception centered around active lesions.\\n\\n\\nWe believe this topic could be an extremely important independent research subject. Therefore, we plan to address this as an independent focus in future work, reflecting the reviewer\\u2019s excellent feedback.\\n\\n\\nAdditionally, while diagnostic ability varies by a physician\\u2019s age or experience, we do not believe that diagnostic ability simply correlates with age or years of clinical experience. Rather, diagnostic ability is likely proportional to the amount of time actively spent performing diagnostic tasks. Therefore, a detailed investigation into precise clinical experience may be necessary.\\n\\n\\n\\n8. Binary Strings of SPV\\n\\n\\nThe binary strings of SPV (Similarity Pattern Vector) function as a type of one-hot vector, where each similarity measurement task is substituted as a single dimension. For example, if there is one anchor and two comparison images, there are two possible similarity outcomes, which can be represented as either 0 or 1. 
If there are 500 such task sets, a 500-dimensional SPV can be defined.\\n\\nWhile this one-hot vector is very simple, it lacks weights between dimensions, meaning it can straightforwardly represent the similarity of similarity patterns (a form of meta-similarity, as we conceptualize it) using basic Euclidean distances. For the same reason, reducing the dimensionality of this one-hot vector and visualizing it with t-SNE does not pose any technical flaws.\\n\\n\\n9. Similar statements in Section 2.1 and other presentation issues\\n\\nThank you for the suggestion. The similar statements in Section 2.1, awkward wording in Section 4.5, and issues with the use of \\\"only\\\" will be revised and reflected in the updated version.\"}",
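The SPV construction and its Euclidean meta-similarity described in this response can be sketched in a few lines (a plain-Python illustration; the function names are ours, not from the paper):

```python
from math import dist

def spv(choices):
    """Similarity Pattern Vector: each triplet task contributes one binary
    dimension (0 or 1) encoding which of the two comparison images was judged
    more similar to the anchor. 500 tasks yield a 500-dimensional SPV."""
    assert all(c in (0, 1) for c in choices)
    return list(choices)

def meta_similarity(spv_a, spv_b):
    """Euclidean distance between two subjects' SPVs. For binary vectors this
    equals sqrt(Hamming distance), so plain Euclidean geometry (and hence
    Euclidean-based t-SNE on these vectors) is well-defined."""
    return dist(spv_a, spv_b)
```

Because every dimension is an unweighted 0/1 choice, subjects who answered the same task sets become directly comparable points in this space.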
"{\"title\": \"Thank you for your constructive comments (Response 2)\", \"comment\": \"3. Consistency of results / Data release\\n\\n\\nWe deeply appreciate your recognition of our potential and efforts. While it may seem obvious that personalized modeling yields better results than general metric modeling, we would like to emphasize once again that achieving meaningful personalized modeling outcomes is quite challenging, given the limited amount of behavioral data that can be obtained for each individual. What we find particularly remarkable is that, despite the lack of additional sampled information for each individual, the model enhanced with an autoencoder appears to amplify the limited individual data effectively.\\n\\nThe data will, of course, be made available through ICLR 2025. We believe that collecting and sharing high-cost data, with expert subjects, is a significant contribution to the machine learning community. If this contribution is recognized, we hope it will encourage other institutions to collect and share similar high-cost data, ultimately fostering studies that make use of such data.\\n\\n\\n\\n4. Figure 4(b)\\n\\n\\nWe apologize for the insufficient explanation. In Fig. 4 (b), the diagonal represents the results of testing a specific subject\\u2019s model on that subject\\u2019s test data. These values correspond to the light blue (SP) bars in Fig. 4 (a). The blue bars (NSP) in Fig. 4 (a) represent the average of the results from testing a specific subject\\u2019s model on the test data of other subjects, excluding that particular subject. The values outside the diagonal in Fig. 4 (b) represent the results for each test data point from other subjects, rather than the average.\\nAccording to our definition, the average prediction performance on test data from other subjects is the NSP. Therefore, the average of all the values in each row of Fig. 4 (b), excluding the diagonal values, represents the NSP for each model. 
\\n(Please let me know if my explanation is unclear or insufficient.)\\n\\nNote that the image sets used for modeling and the image sets used for performance testing of the trained models are the same for all subjects within the group. Since modeling is done independently for each individual, the model trained on a specific subject\\u2019s data should perform well on that subject\\u2019s test data but should show lower performance on test data from other subjects.\\n\\n\\n5. Unnecessary text\\n\\nI agree with the reviewer\\u2019s comment. We will incorporate the reviewer\\u2019s suggestions in the revised version to be uploaded in the next few days, removing or modifying the relevant text.\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nI am pleased to inform you that the updated version of our manuscript has been uploaded. The new version incorporates your valuable feedback, and the following revisions have been made:\\n\\n1. Unclear Organization:\\nSection 2 on Related Work has been reorganized for better clarity, and Section 2.2 has been strengthened to provide a smoother transition into the discussion of personalized embedding modeling. In Section 4, the qualitative evaluation methods described have been moved to Section 3.5 to more clearly separate the methods and experimental results. Additionally, Figure 2 now clearly indicates that the differences represent time steps of the same module, rather than different modules, and unnecessary details that could have caused confusion have been removed.\\n\\n2. Hyperparameter Sensitivity Analysis:\\nThe results of the sensitivity analysis have been updated in Appendix (A.7).\\n\\n3. Statistical Analysis of Similarity Pattern Diversity:\\nIn Line 376, we have included the statistical analysis of the diversity of similarity patterns (multivariate runs test) along with the corresponding p-value.\\n\\nPlease note that further updates to the manuscript may be made before the final deadline.\\n\\nKind regards,\"}",
"{\"title\": \"Thank you for your constructive comments (Response 1)\", \"comment\": \"Thank you for your constructive comments. I sincerely appreciate your acknowledgment of the potential of our work and your recognition of the value of the data. I hope the responses below address your concerns effectively.\\n\\n\\n1. Novelty of our work compared to previous study\\n\\nWe have carefully reviewed the reviewer\\u2019s comments and acknowledge the possibility that our claims may be overly assertive. We will reflect this in the revised version, which will be uploaded in the coming days. \\nSeparately, we would like to highlight several aspects of our study that distinguish it from previous studies:\\n\\n(1) Use of complex expert data for individual embedding modeling:\\n\\nOur work attempts to model individual embeddings of experts (non-radiologist physicians) using complex, practical expert data, namely medical imaging (chest X-rays). To the best of our knowledge, no studies have explored individual embedding modeling using real-world (expert) data. It is well established that the dimensionality required to determine similarity for general image data is relatively low (e.g., Jha, A., Peterson, J. C., & Griffiths, T. L. (2023). Extracting low\\u2010dimensional psychological representations from convolutional neural networks. Cognitive Science, 47(1), e13226). For general datasets with clear labels, individuals can perceive similarities based on distinct attributes, resulting in minimal variation in similarity patterns among individuals. For example, when comparing images of a dog, a cat, and a snake, most people would consider the dog and cat to be more similar.\\n\\nIn contrast, expert data involves significantly higher dimensionality in determining similarity. For instance, in the domain of chest X-rays, even if two physicians arrive at the same diagnosis, the pathways and patterns they use to reach that conclusion can vary greatly. 
Demonstrating that individual metric learning is feasible with such complex expert data and building evidence to support this capability represents a meaningful contribution to the metric learning community, separate from studies using general datasets.\\n\\n\\n(2) Demonstrating the utility of unsupervised learning (Autoencoders) in person-specific metric learning:\\n\\nIn addition to showing the feasibility of individual metric learning with expert data, our work highlights the novelty of addressing practical challenges in this domain through unsupervised learning (autoencoders). In domains with complex data, such as medical imaging, where individual similarity perception patterns vary, the amount of similarity information that can be sampled (measured) from a person is extremely limited.\\n\\nFor instance, in our experiment, collecting 500 samples per individual required over 5 hours on average. Considering fatigue and the constraints of experts' working hours, continuous measurement is challenging, requiring significant time and effort overall. Despite this, the 500 samples collected are insufficient for robust modeling given the complexity of the image domain. Conventional modeling approaches struggle to achieve meaningful predictive performance with such limited data.\\n\\nTo address this, we integrated unsupervised feature extraction through autoencoders with traditional methods. There are no prior examples in metric learning where unsupervised learning was combined to improve performance. Autoencoders do not require additional information to train but can flexibly learn the manifold structure in an unsupervised manner. 
(It can be explained that the autoencoder amplifies the individual similarity information obtained from a small number of samples) Our experiments demonstrated that this capability can be harnessed to extract characteristics specific to individual human learners, which we believe is a novel contribution.\\n\\nIn response to the reviewer\\u2019s comments, we will revise the claim of being the \\\"first large-scale experimental study\\\" to limit it to the \\\"expert domain\\\" and strengthen the Related Works section in the revised version.\\n\\n\\n2. Details of data collection\\n\\nIn each set, three images were presented, and clinical physician participants evaluated the similarity by comparing the three images. Each participant completed a total of 500 sets without exception. Participants who withdrew were excluded from the analysis. The total time spent, as clearly mentioned on lines 333-334 of this paper, was an average of 304 minutes for the CXR-A group and 245 minutes for the CXR-B group. Details of the clinical physicians' background, age, and other information related to the data collection process are provided in Appendix 1 at the end of the paper. Further detailed information about the participants' data is also provided in the supplementary file.\"}",
"{\"title\": \"We deeply appreciate the constructive questions and comments (Response 1)\", \"comment\": \"We deeply appreciate the constructive questions and comments from the reviewer. Your feedback has been invaluable in helping us set a clear direction for improving our work. We hope the following responses effectively address your concerns.\\n\\n\\n1. Limited novelty in terms of machine learning (Weakness 1 & 3)\\n\\nWe agree with the reviewer\\u2019s comment regarding the perceived lack of novelty in our machine learning methodology. However, when submitting this paper, we clearly selected the \\\"applications to neuroscience & cognitive science\\\" field from the official topics for ICLR 2025 (Reference: https://iclr.cc/Conferences/2025/CallForPapers). Since the application of machine learning to cognitive science is an official topic for ICLR 2025, we believe that the lack of novel machine learning methodologies in this paper should not be considered a weakness. We wish to emphasize that our work is a novel application rather than a theoretical contribution to machine learning methods. There is a significant gap in the research on applying metric learning methods to individual metrics, and we are presenting a pioneering case for modeling individual metrics on expert data. This provides strong evidence for the scalability of metric learning methods in real-world data.\\n\\n\\nFurthermore, regarding the reviewer's comment that our paper may have limited impact on the machine learning community, we believe that our work can still positively influence the field for the following reasons:\\n\\n\\n(1) We have gathered similarity sampling data from 121 experts (clinical doctors), which required considerable time and effort, unlike experiments conducted on the general population. Expert-level behavioral sampling data in the human-in-the-loop machine learning field is rare, and we believe this data can be widely used for validating machine learning methodologies. 
We plan to make all the data publicly available through ICLR 2025. If the high-cost data collection is recognized as an independent contribution, it could motivate related research institutions to actively participate in data collection and sharing. We believe this will encourage more research utilizing high-cost data in the machine learning community.\\n\\n\\n(2) One of our contributions is presenting a new application of autoencoders. We offer experimental evidence that the flexibility of unsupervised learning can be applied to cognitive science modeling. This may stimulate further research on autoencoders that control manifold learning, potentially advancing this line of study.\\n\\n\\n2. Exploration of Benchmark Dataset Application and Generalizability (Weakness 2 and Question 2)\\n\\nWe greatly appreciate the important comment. We agree with the point that experimental validation using other datasets is necessary to rigorously demonstrate generalizability. Unfortunately, during the review period of this paper, it is physically difficult to conduct additional experiments with other datasets. However, given the purpose of our work, we approached the use of benchmark datasets with some caution. The reason is that many benchmark datasets tend to show little variation in the similarity patterns between individuals. In the case of benchmark datasets, there are conventional metrics for determining what data is perceived as similar or not. Even if individuals make judgments that deviate from these criteria for certain features, the overall trend remains consistent. This is because benchmark datasets typically have clear labels, and these labels serve as the key criteria for determining similarity metrics. While such benchmark datasets are reasonable for conventional metric learning (which aims to learn the average similarity metric across a population), it is uncertain whether they are suitable for person-specific metric learning. 
In contrast, expert data is more complex and uncertain, with highly varied individual similarity recognition patterns. Therefore, to explore the generalizability of our framework, experiments based on more complex data or expert-driven datasets might be necessary. We would like to respectfully note that conducting experiments with expert subjects and expert data requires significant costs, making it quite challenging to perform experiments across multiple expert domains simultaneously. We believe that if contributions from single expert experiments are acknowledged, it will encourage future researchers to carry out various behavior experiments targeting expert datasets.\\n\\nAdditionally, while limited, we would like to emphasize that, in exploring generalizability, we independently used the CXR-A and CXR-B datasets, which have substantially different characteristics (i.e., different distributions) in terms of image properties. Furthermore, to carefully address the possibility of overclaiming our argument regarding generalizability, we are considering adding the qualifier \\\": focusing on medical images\\\" to the title of our paper.\"}",
"{\"summary\": \"This paper presents a new method to model individual ways of understanding and interpreting medical images. In the field of medicine, experts often see images differently based on their personal experiences and knowledge. Therefore, this paper aims to capture these unique perspectives by creating custom models for each doctor that reflect how they personally perceive similarities in medical images. To achieve this, the authors combine supervised learning for image similarity and unsupervised learning with an autoencoder to build a broader model of each doctor's cognitive pattern without requiring labels. In addition, the model uses triplet loss to help the system understand which images a doctor sees as more similar or different from each other. The authors conducted an extensive experiment with 121 doctors, asking them to judge the similarity between chest X-rays.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe paper is well-structured, with clear explanations of the proposed framework and its components. The authors provide a thorough background, outlining the challenges in modelling person-specific cognitive similarities and the limitations of existing methods.\\n2.\\tThe research is underpinned by a robust experimental design involving 121 clinical physicians, providing a substantial dataset for analysis. The methodology is meticulously detailed, ensuring reproducibility and transparency. The integration of behavioural sampling to capture each participant's perception of similarity among chest X-ray images adds depth to the study, reinforcing the reliability of the findings.\", \"weaknesses\": \"1.\\tIn Section 3.1, the authors implemented the measurement of cognitive similarity through a triangular arrangement of images, where physicians arrange images based on perceived closeness. However, this may lack depth in capturing nuanced interpretative differences. 
This approach does not account for context-dependent interpretation, such as how a physician might consider patient demographics or clinical history when assessing similarity. Therefore, the authors can benefit from incorporating more sophisticated cognitive tests or context-dependent tasks that could improve the understanding of the factors that influence these cognitive patterns. This added information could be used to fine-tune the embedding model.\\n2.\\tThe authors use a convolutional autoencoder for CXR images and prove its efficiency. However, testing alternative architectures \\u2014 like ViT (Dosovitskiy et al., 2020) \\u2014 could provide insights into which architectures best capture complex cognitive similarities. Moreover, adding architectural flexibility or adaptivity within the model, perhaps by using modular components that can adjust based on data type, would make the framework more broadly applicable.\\n3.\\tThe paper lacks comparisons with alternative embedding models that could serve as baselines. Without baselines, it is difficult to understand whether the proposed autoencoder with variable triplet loss truly excels over other methods. \\n\\nDosovitskiy, Alexey et al. \\u201cAn Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.\\u201d ArXiv abs/2010.11929 (2020): n. pag.\", \"questions\": \"1. Could you give a more detailed explanation of the variable triplet loss function? And why do you define such variable triplet loss and what mechanisms allow it to adapt to individual cognitive patterns?\\n2. While the study focuses on chest X-ray images, have you considered applying this approach to other medical imaging modalities, such as MRI or CT scans? If so, what adaptations would be necessary to accommodate the distinct characteristics of these modalities?\\n3. Could you elaborate on the decision to utilize CNNs for the autoencoder component instead of Transformer-based architectures? 
Given that Transformers have demonstrated effectiveness in capturing long-range dependencies and global context in image processing tasks, what were the considerations that led to favouring CNNs in this context? Additionally, how was the network architecture determined to ensure optimal performance in fine-grained image processing tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nThank you for your patience. The revised version of the paper has been updated. \\nPlease note that there may be additional updates before the final deadline, as we continue to incorporate further feedback and analysis.\\n\\nRegarding your concerns, the following changes have been made in the revised version:\\n\\n1. Revised Section 2.2 - \\\"PERSON-SPECIFIC COGNITIVE SIMILARITY MODELING\\\": \\n\\nIn response to your comment, we have extensively rewritten this section to cover a broader range of work related to personalization. We highlight that numerous feature engineering studies have been conducted for individual-level embedding modeling. However, we point out that these existing studies have not addressed practical issues such as the lack of individual-specific sampling in person-specific modeling. We emphasize that our approach, combining unsupervised feature learning, aims to resolve the issue of limited individual-specific data, which has been largely overlooked in previous research. Additionally, we clarify that previous embedding modeling studies primarily used benchmark datasets with minimal inter-individual variation in similarity metrics, whereas our study is novel in utilizing expert data with significant metric differences at the individual level.\\n\\n2. Modification of Big Claim: \\n\\nThe claim on Line 101 has been revised from \\\"first large-scale experimental study to model individual-level psychological embeddings\\\" to \\\"first expert-based experimental study to model individual-level psychological embeddings\\\" to provide more specific context.\\n\\n3. Removal of Unnecessary Text:\\n\\n We have removed unnecessary text from Lines 101-107.\\n\\nPlease be aware that additional revisions may still be made.\\n\\nKind regards,\"}",
"{\"metareview\": \"This study explores individual psychological embeddings in expert domains, such as medicine, where cognitive similarities vary among professionals. This paper initially received mixed scores, and the authors did not fully address the reviewers' concerns. Specifically, there are two critical issues that remain unresolved:\\n\\n1. **Lack of External Validation:** The paper does not include external validation using benchmark datasets, which is essential to assess the generalizability and robustness of the proposed method. This absence raises questions about the applicability of the findings in broader contexts.\\n\\n2. **Technical Issues and Required Revisions:** The reviewers identified several technical shortcomings that need to be corrected. Addressing these would necessitate a comprehensive rewrite of the paper, which currently renders it unsuitable for publication in its existing form.\\n\\nIn light of these concerns, I recommend rejecting this submission. I hope the reviewers' comments improve the quality of this paper. The authors are strongly encouraged to address these points thoroughly before resubmitting.\", \"additional_comments_on_reviewer_discussion\": \"My decision is based on the following key issues:\\n\\n- Reviewer `xoWe` mentioned that external validation is necessary on benchmark datasets, and how the proposed method improves SOTA active learning techniques remains unclear.\\n\\n- Reviewer `tueP` mentioned that a significant rewrite of the work should be required to evaluate it as a new manuscript to clarify the novelty of the proposed method, the experiment significance, the justification for modeling individual data using embedding, etc.\\n\\nAlthough this paper presents an interesting idea, the current version is still too immature to be polished. I have to recommend rejecting this paper for now. 
However, the authors are strongly encouraged to include the response in the main paper, systematically reorganize it, and then submit it to a future conference or journal.\"}",
"{\"title\": \"We deeply appreciate your constructive review comments (Response 1)\", \"comment\": \"We deeply appreciate your constructive review comments. We hope the responses below address your concerns effectively.\\n\\n\\n1. Concerns regarding sampling information (Weakness 1)\\n\\nWe agree with the reviewer\\u2019s opinion. If our work were considered a practical medical application, our sampling methodology might not fully capture subtle interpretative differences. However, our primary goal is to bridge the gap between metric learning and modeling physicians\\u2019 similarity judgments. Our objective is not direct medical application, but rather using real-world data to test hypotheses in the context of cognitive science applications of metric learning.\\n\\nSpecifically, we aim to demonstrate the feasibility of personalized embedding models using real-world data, not benchmark datasets, in realistic scenarios. To achieve this, we first validated our hypothesis in the simplest possible setting, as there is no evidence to date that personalized embedding modeling is feasible even in single-modality expert data.\\n\\nIn our study, physician participants judged similarities based solely on CXR images, without additional clinical information. If personalized embedding can be achieved under these conditions, it could lead to future research incorporating contextual information or metadata into embedding models, as the reviewer suggested.\\n\\nThis simple setup is key to demonstrating the generalizability of our approach. If we can model person-specific similarity using just images, without relying on domain-specific assumptions (e.g., medical histories), it could show that this framework applies to other expert domains, even in single-modality scenarios.\\n\\nWe appreciate the reviewer\\u2019s insights and will consider them as we expand this research for medical domain applications.\\n\\n\\n2. 
Variable Triplet Loss\\n\\nThank you for raising such an important question. Our Variable Triplet Loss is not an innovative development but rather a practical modification of the traditional triplet loss proposed in previous studies. In conventional machine learning, metric learning typically learns a general metric that averages similarity judgments across many data points. In contrast, for high-complexity data (e.g., medical images), individual similarity patterns can vary significantly, meaning that metrics should be sampled on an individual basis. However, it is difficult to sample many data points from a single individual, and there is high uncertainty in the information obtained from these samples. To address this issue of individual similarity modeling, we modified the traditional loss function to enhance its practical utility. Below, we compare the traditional triplet loss with our Variable Triplet Loss:\", \"traditional_triplet_loss\": \"$L(A, P, N) = \\\\max \\\\left( \\\\left| f(A) - f(P) \\\\right|^2 - \\\\left| f(A) - f(N) \\\\right|^2 + \\\\mathrm{margin}, 0 \\\\right)$\", \"our_triplet_loss\": \"$L(A, P, N) = \\\\max \\\\left( \\\\alpha \\\\left| f(A) - \\\\hat{f(P)} \\\\right|^2 - \\\\beta \\\\left| f(A) - \\\\hat{f(N)} \\\\right|^2, 0 \\\\right)$\\n\\nFor comparison, please note that the terms in our Equation (1) in the paper correspond to the traditional loss function as follows:\\n\\nC (closed) \\u2192 P (positive), D (distant) \\u2192 N (negative), A (anchor) \\u2192 same, E() \\u2192 f().\", \"our_loss_function_has_two_main_advantages_for_learning_individual_embeddings\": \"(1) Absence of margin term:\\n\\nIn traditional triplet loss, there is an explicit margin term that forces the representation distance between positive samples to be closer than the distance between negative samples. 
However, our loss function does not use a margin but instead uses weighting terms \ud835\udefc and \ud835\udefd to encourage the representations of positive samples to be closer to the anchor than those of negative samples. That said, unlike the traditional loss, we do not enforce the representation distance between positive samples to be strictly closer than that of negative samples. This is significant because, unlike typical metric learning, where average trends are learned, individual similarity data for expert datasets is sparse and could reflect errors or outliers. For instance, when sampling similarity from 100 sets, 5-10 of these samples may not align with the individual's general tendency due to human error. In such cases, it is important not to force the model to learn outliers as part of the general tendency through a margin, but instead to ensure that data points that deviate from the overall trend are not included in the embedding model.\\n\\n(2) Constant embedding for Positive and Negative samples during training:\\n\\nIn our triplet loss, the values of $\\\\hat{f(P)}$ and $\\\\hat{f(N)}$ are treated as constants calculated from the model of the previous epoch. In contrast, traditional triplet loss treats ${f(P)}$ and ${f(N)}$ as variables to be learned. Our setup is based on the observation that when triplet loss is combined with the reconstruction loss of an autoencoder, it is empirically more stable to treat $\\\\hat{f(P)}$ and $\\\\hat{f(N)}$ as constants and only learn $f(A)$.\"}",
"{\"comment\": \"I appreciate the authors' effort in rebuttal. Most of my concerns have been addressed.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"I appreciate the authors' detailed responses to my previous review. They've clarified the motivation behind their work and highlighted the potential contributions, particularly concerning the expert-level dataset and the novel application of autoencoders.\\n\\nHowever, I still believe that demonstrating the generalizability of the proposed method is crucial for its acceptance. External validation on benchmark datasets, even if those datasets aren't perfectly suited for person-specific learning, would significantly strengthen the paper. Additionally, a more in-depth analysis of how the method connects to and potentially improves existing active learning techniques would solidify its position within the machine learning literature. While I maintain my current score, I am open to reconsidering it if the authors can convincingly address these remaining concerns.\"}",
"{\"comment\": \"We would like to express our sincere gratitude once again for the constructive comments from the reviewer and the reevaluation of our paper.\"}",
"{\"title\": \"Response\", \"comment\": \"I have updated my review and score in response to the updated manuscript. Thank you.\"}",
"{\"title\": \"We deeply appreciate the thorough review comments (Response 1)\", \"comment\": \"We deeply appreciate that the reviewer has thoroughly understood and carefully reviewed the details of our manuscript, providing invaluable feedback to improve our work. Personally, I have learned a lot from your comments.\\nI hope the following response addresses your concerns.\\n\\n\\n1. Novelty of the embedding workflow\\n\\n\\nThank you for providing a detailed account of prior related works that we may have overlooked. We will cite all the studies you mentioned in our paper and will take them into careful consideration for our future work. We would like to emphasize again that the novelty of our study lies in modeling expert data with high complexity. Most existing studies use benchmark datasets or data with lower complexity, where inter-human similarity perception tends to show minimal variability. To the best of our knowledge, no prior work has provided evidence that the patterns of similarity perception differ significantly among individuals when using the datasets employed in those studies.\\n\\nIn contrast, we specifically selected datasets where similarity perception is more likely to vary across individuals, and we provided evidence supporting this claim (Sec 4.3). Moreover, conducting behavioral experiments with experts is inherently challenging and rare, and we believe that this independent contribution should be acknowledged to encourage further work despite the high costs and risks associated with such studies. The expert behavioral data we are making publicly available through ICLR will, we believe, serve as a valuable resource for researchers in this field.\\n\\n\\n2.\\tRationale for Using Autoencoders Instead of Encoders\\n\\nWe agree with the reviewer\\u2019s comment that, considering the properties of CNNs, an encoder alone can extract substantial visual cognitive features. 
However, in general, training an encoder requires labeled data, which could constrain the type of features it learns. Since humans, especially in the case of expert data, do not necessarily judge the similarity of data based solely on labels, it seems difficult to definitively conclude that features learned solely by an encoder are a better alternative than an autoencoder for person-specific embedding learning.\\n\\nWhile autoencoders have limitations in learning based on the features necessary for image reconstruction, their label independence suggests that they may uncover diverse manifolds that could provide a better understanding of image similarities. We are not claiming that the autoencoder is the optimal architecture for person-specific embedding learning. Rather, we wish to emphasize that unsupervised learning, independent of labels, when combined with triplet loss, has shown the potential to amplify limited individual human behavior sampling information.\\n\\nWe apologize if our explanation did not adhere to strict mathematical definitions. We will make an effort to include comparative results with encoder-only models in the revised version of the manuscript, which we plan to upload within the next few days.\\n\\n\\n3. Justification for modeling individual data \\n\\nWe would like to emphasize that the ultimate goal of this work goes beyond simply identifying different individuals at the behavioral level. For example, experts may possess vastly different knowledge and skills. If there is an expert with a very high level of expertise, we could potentially gain inspiration for implementing high-performance machine learning models by reconstructing the expert's embedding. Alternatively, for the individualized, customized learning of expertise, it may be crucial to identify which data points hold high uncertainty for an individual. 
Since the uncertainty of highly similar data tends to be similar, if we can reconstruct an individual's embedding, we could refine the individual's learning through personalized uncertainty estimation.\\n\\n4. Interpretability analysis\\n\\nWe apologize for any confusion caused. In the revised version, we will move the details to the methods section and quantify the part that was previously only qualitatively assessed through visuals, presenting it in a table format. As the reviewer pointed out, the novelty of the masking procedure cannot be considered as part of our contribution. However, as intended, the masking experiment serves as an auxiliary analysis to demonstrate the validity of our methodology, and we kindly ask that you consider this in the context of our intention not to emphasize the novelty of the masking methods themselves.\"}",
"{\"title\": \"Response to Rebuttal\", \"comment\": \"I would like to thank the authors for their rebuttal. I do feel that their thoughtful comments address some of my comments on the paper. I would be willing to consider changing my score; however, I cannot see a revised version of the paper uploaded and as such it is a little hard to evaluate the actual modifications that have been implemented. If the authors have uploaded a revision, can they let me know? Ideally that revision would have marked up changes to make it easier to evaluate.\"}",
"{\"comment\": \"The authors' responses are appreciated and have addressed some of my concerns. However, after reading the other reviewers' comments, I decided to keep my score for now.\"}"
]
} |
EQgEMAD4kv | CAKE: Cascading and Adaptive KV Cache Eviction with Layer Preferences | [
"Ziran Qin",
"Yuchen Cao",
"Mingbao Lin",
"Wen Hu",
"Shixuan Fan",
"Ke Cheng",
"Weiyao Lin",
"Jianguo Li"
] | Large language models (LLMs) excel at processing long sequences, boosting demand for key-value (KV) caching. While recent efforts to evict KV cache have alleviated the inference burden, they often fail to allocate resources rationally across layers with different attention patterns. In this paper, we introduce Cascading and Adaptive KV cache Eviction (CAKE), a novel approach that frames KV cache eviction as a ``cake-slicing problem.''
CAKE assesses layer-specific preferences by considering attention dynamics in both spatial and temporal dimensions, allocates rational cache size for layers accordingly, and manages memory constraints in a cascading manner. This approach enables a global view of cache allocation, adaptively distributing resources across diverse attention mechanisms while maintaining memory budgets.
CAKE also employs a new eviction indicator that considers the shifting importance of tokens over time, addressing limitations in existing methods that overlook temporal dynamics.
Comprehensive experiments on LongBench and NeedleBench show that CAKE maintains model performance with only 3.2\% of the KV cache and consistently outperforms current baselines across various models and memory constraints, particularly in low-memory settings. Additionally, CAKE achieves over 10$\times$ speedup in decoding latency compared to full cache when processing contexts of 128K tokens with FlashAttention-2. Our code is available at https://github.com/antgroup/cakekv. | [
"Large Language Model",
"Efficient Generative Inference",
"Key-Value Cache"
] | Accept (Poster) | https://openreview.net/pdf?id=EQgEMAD4kv | https://openreview.net/forum?id=EQgEMAD4kv | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zv9YTbezjK",
"wkBn0zGAcB",
"thIk5sJvtV",
"saSHU4SCRG",
"pecUFD2mBy",
"pVPA0GYYWX",
"oKQZ98U2Cf",
"mD7AD5fKRr",
"jLWpMd8w85",
"ihbaLj8N1O",
"YrU1TNoUQT",
"UB1fOtdAZY",
"TKeGdKwWkp",
"NJeh2JMgIy",
"N1kTmIMX1W",
"JfciTs2uvv",
"BpEPEx3IxU",
"AoCGZ1SwWK",
"9ydp42Flae",
"9JsK0xnV2P",
"88fDjBtjNZ",
"5xaOFowZrj"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1737523986153,
1732707501449,
1732658599195,
1732360282300,
1732707377731,
1732539413626,
1732658477231,
1732363391908,
1732658564879,
1734998205919,
1732370828417,
1732666056415,
1730524916908,
1732624191924,
1730576639544,
1732367283385,
1732632384480,
1731287506036,
1732359868920,
1730547812505,
1732663137816,
1732365456877
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Area_Chair_Lr3e"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Area_Chair_Lr3e"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Area_Chair_Lr3e"
],
[
"ICLR.cc/2025/Conference/Submission9488/Area_Chair_Lr3e"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Reviewer_zgXb"
],
[
"ICLR.cc/2025/Conference/Submission9488/Reviewer_zgXb"
],
[
"ICLR.cc/2025/Conference/Submission9488/Reviewer_BrWg"
],
[
"ICLR.cc/2025/Conference/Submission9488/Reviewer_LJXW"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Reviewer_rCQd"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9488/Reviewer_BrWg"
],
[
"ICLR.cc/2025/Conference/Submission9488/Reviewer_LJXW"
],
[
"ICLR.cc/2025/Conference/Submission9488/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to reviewer zgXb\", \"comment\": \"We're delighted that our additional experiments and explanations have sufficiently addressed your concerns. Once again, we would like to express our sincere gratitude for your positive feedback and for contributing to our manuscript.\"}",
"{\"comment\": \"Dear reviewer zgXb,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"title\": \"Response to reviewer zgXb, part2\", \"comment\": \">#### **W4**: Evaluation on more backbones.\\n\\nFollowing your advice, we have incorporated experiments on two additional backbones, Qwen and Gemma, to further strengthen our analysis. Specifically, we conducted comparisons under both low-budget settings ($B_{\\\\text{total}} = 128L$) and high-budget settings ($B_{\\\\text{total}} = 1024L$) across 16 datasets on LongBench. The average results are presented as follows, with detailed results provided in **Appendix F.2**.\\n\\nMethod | Qwen2.5-7B-Instruct | Gemma-7B-Instruct\\n-------|---------------------|-------------------\\nFull Cache | 48.87 | 34.09\\n**Cache size = 128L** | |\\nStreamingLLM | 31.32 | 25.68\\nH2O | 38.3 | 30.49\\nTOVA | 38.0 | 31.24\\nSnapKV | 40.09 | 31.57\\nPyramidKV | 36.83 | 30.61\\nCAKE (ours) | **41.68** | **32.38**\\n**Cache size = 1024L** | |\\nStreamingLLM | 37.38 | 29.73\\nH2O | 44.14 | 33.32\\nTOVA | 46.23 | 34.07\\nSnapKV | 47.27 | 34.05\\nPyramidKV | 45.85 | 33.44\\nCAKE (ours) | **47.70** | **34.18**\\n\\nAs can be seen, CAKE consistently outperforms other baselines across both low-memory and high-memory scenarios, even with the inclusion of the new backbones. Notably, Gemma with $B_{\\\\text{total}} = 1024L$ achieves a performance that surpasses the full-cache baseline (34.18 vs. 34.09). In addition to expanding the range of model architectures, we also conducted experiments with larger model sizes, including 13B, 32B, and 70B. CAKE continues to deliver the best performance in these settings, with detailed experimental results available in **Appendix F.3**. We believe these results highlight the effectiveness and generalizability of our proposed approach.\\n\\n>#### **W5**: Code Reproducibility\\n\\nWe appreciate your valuable feedback regarding code availability and fully understand the importance of open-sourcing for ensuring reproducibility. 
To address this, we are actively preparing the code and relevant documentation for public release. We will ensure that our work can be fully reproduced by the research community and plan to make the codebase available upon the acceptance of this paper.\\n\\n>#### **W6**: Comparison with quantization methods.\\n\\nWe appreciate the reviewer's suggestion. It's important to note that CAKE focuses on KV cache eviction through dropping unimportant KV pairs, which is orthogonal to KV cache quantization methods that aim to reduce storage overhead through bit reduction. Both CAKE and quantization methods can be jointly used with flash-attention. More importantly, CAKE is compatible with quantization methods to pursue more efficient KV cache storage as we evaluate in the following part. We have conducted additional experiments on Llama2-7B-Chat comparing CAKE with two typical quantization methods: 1) KIVI[1], a state-of-the-art KV cache quantization method that adopts asymmetric KCVT quantization (quantizes Key cache per-channel and Value cache per-token, similar patterns are also adopted by KVQuant [2] and GEAR [3]), and 2) KCVC, which quantizes KV cache both per-channel for efficiency. Due to time constraints during rebuttal, we compare with KIVI as a representative method, since it shares similar quantization schemes with KVQuant and GEAR. The experimental results on LongBench are summarized as follows:\\n\\nMethod | Compression ratio | Avg.\\n-------|------------------|------\\nFull Cache (16 bit) | - | 33.07\\n**CAKE** | 50% | **33.23**\\nKCVC (4 bit) | 25% | 32.72\\nKIVI (4 bit) | 25% | 32.71\\n**CAKE** | 25% | **32.91**\\nKCVC (2 bit) | 12.5% | 23.71\\nKIVI (2 bit) | 12.5% | 32.17\\n**CAKE** | 12.5% | **32.32**\\nKIVI (4 bit) + **CAKE** | 12.5% | **32.51**\\nKIVI (4 bit) + **CAKE** | 6.25% | **32.48**\\n\\nFor a fair comparison, we evaluate different methods under the same compression ratios. 
CAKE achieves better performance (33.23) than full cache (33.07) at a compression ratio of 50%; at 25% and 12.5% compression ratios, CAKE consistently outperforms both KIVI and KCVC; most importantly, combining CAKE with KIVI-INT4 achieves better results at 12.5% and 6.25% compression ratios compared to KIVI-INT2 alone. These results validate that: **(a)** CAKE is effective as a standalone method, **(b)** CAKE can work synergistically with quantization approaches, **(c)** the combination enables even higher compression ratios while maintaining performance. We have added this discussion in **Appendix E**.\\n\\n[1] Liu, Zirui, et al., \\\"KIVI: Plug-and-Play 2bit KV Cache Quantization with Streaming Asymmetric Quantization.\\\" (2024).\\n\\n[2] Hooper, Coleman, et al., \\\"KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization.\\\" arXiv preprint arXiv:2401.18079 (2024).\\n\\n[3] GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM.\"}",
"{\"title\": \"Response to reviewer LJXW\", \"comment\": \"Thank you for your valuable feedback! We have revised the manuscript to better emphasize our key contributions. We're delighted that our additional experiments and explanations have sufficiently addressed your concerns. Once again, we would like to express our sincere gratitude for your positive feedback and for contributing to our manuscript.\"}",
"{\"title\": \"Kind Request for Discussions and Feedback for Paper 9488\", \"comment\": \"Dear Reviewers,\\n\\nWe are deeply grateful for your thorough reviews and valuable feedback on our paper.\\n\\nAs the discussion period nears its end, we hope our responses have effectively addressed the points you raised. Your insights have been instrumental in strengthening our work, and we welcome your continued engagement.\\n\\nThank you for your dedication and expertise.\\n\\nBest regards,\\n\\nAuthors of Paper 9488\"}",
"{\"comment\": \"Dear reviewer rCQd,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"title\": \"Response to reviewer rCQd\", \"comment\": \"We sincerely appreciate your thorough review and positive assessment of our work. Your constructive feedback is invaluable in helping us improve both the technical content and presentation clarity of our paper. We have carefully addressed each of your comments below, with the corresponding modifications **highlighted in blue** in the revised manuscript.\\n\\n> **W1&W2**: Suggestions on Terminology and Wording\\n\\n We appreciate your valuable feedback on our writing. We agree with both suggestions and will revise \\\"KV\\\" to \\\"Key-value (KV)\\\" on its first appearance in the abstract. We also concur that replacing \\\"optimally\\\" with \\\"adaptively\\\" more accurately reflects our method's capabilities. Thank you for helping us improve the precision and clarity of our paper. These changes have been incorporated in the revised version of our paper.\\n\\n> **W3**: Clarification on Equation (4).\\n\\n We regret the error in our previous statement. The correct operation involves transposing $\\\\text{log}\\\\mathbf{A}[i, :]$ and then computing the inner product with $\\\\mathbf{A}[i, :]$ by using the expression $\\\\mathbf{A}[i, :]\\\\text{log}(\\\\mathbf{A}[i,:])^T$. We have made the necessary correction in our paper and are grateful for your meticulous review.\\n\\n> **W4**: Clarification on Theorem 1. \\n\\nThank you for your careful review of the mathematical presentation. We have changed \\\"Theorem 1\\\" to \\\"Proposition 1\\\" given its straightforward proof. In Proposition 1, \\\"For layer $l\\u2208[L]$\\\" means the allocated budget size decreases monotonically from stage $l$ to $L-1$ for any layer with index in $[0,1,...,L-1]$. 
We have modified this to \\\"For any layer $l\\u2208[L]$\\\" for better clarity.\\n\\n> **W5 & Q1**: Could you give more analysis in which kind of benchmark datasets can CAKE obtain better experimental results than the existing baselines?\\n\\nWe appreciate your insightful question. This is an excellent point that helps us better articulate the comprehensive strengths of CAKE across diverse benchmarks. Our analysis demonstrates CAKE's robust performance and specific advantages in different types of evaluations:\\n\\nOn LongBench tasks (detailed in Table 1 and **Appendix F**), CAKE demonstrates consistent performance improvements across various cache sizes and task types. While other methods occasionally show marginal advantages in specific cases, none exhibits consistent superiority across all conditions, highlighting CAKE's robust general performance.\\n\\nFor NeedleBench (detailed in **Appendix G**), CAKE's advantages become more pronounced in complex tasks, particularly in Multi-Needle Retrieval. For instance, compared to the previous SOTA SnapKV, CAKE achieves significantly better results on Mistral: 71.00 vs. 56.93 ($B_\\\\text{total}=1024L$, **lines 1340-1342**) and 48.55 vs. 19.14 ($B_\\\\text{total}=512L$, **lines 1344-1346**) as shown in **Table 10** of Appendix G. This advantage becomes increasingly pronounced as the cache budget decreases. CAKE's superior performance in these challenging scenarios stems from its design, which considers both long-term significance and short-term fluctuations in information relevance. 
In contrast to existing methods, CAKE employs a more nuanced approach, maintaining a more comprehensive and balanced representation of contextual information, instead of depending solely on static attention scores or fixed cache allocation strategies, thus avoiding premature discarding of information that could be vital for subsequent complex retrievals.\\n\\nNevertheless, as an eviction strategy, under extremely constrained budgets, CAKE's performance inevitably experiences some degradation. To address this, we suggest combining KV cache quantization with CAKE eviction. Our experiments illustrate that this hybrid approach proves adept at maintaining performance under severe memory limitations, as detailed in **Appendix E**.\"}",
"{\"comment\": \"Dear reviewer LJXW,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"metareview\": \"All reviewers agreed the paper proposed a useful contribution to KV cache eviction strategies to speed up the decoding process of LLMs.\", \"strength\": \"1. The proposed method appeared to be novel and useful.\\n2. Good experimental results\", \"weakness\": \"1. Results were only on small models, limiting its impact on practical use with larger models (more results came in rebuttal period).\\n2. Baselines for comparison were not particularly strong. (Improvements were made during rebuttal.)\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers actively participated in the discussion and concerns were mostly addressed by the rebuttal. (I discounted rating 10 as the comments were mostly subjective in the strength section.)\"}",
"{\"title\": \"Summary and general reply to the reviewers\", \"comment\": \"We sincerely appreciate all reviewers' time and efforts in reviewing our paper. Your constructive feedback has substantially helped improve the quality of our work.\", \"we_are_particularly_encouraged_that_the_reviewers_have_recognized_our_key_contributions_in_several_aspects\": [\"Novel and Practical Contribution (Reviewer-rCQd, BrWg, zgXb)\", \"Strong Technical Foundation (Reviewer-rCQd, LJXW, BrWg)\", \"Comprehensive Empirical Validation (Reviewer-rCQd, LJXW, BrWg)\", \"Clear Presentation and Organization (Reviewer-rCQd, LJXW, BrWg, zgXb)\", \"In response to the reviewers' suggestions, we have made the following major improvements:\", \"Extended evaluation to **additional LLM architectures** (Qwen and Gemma), with detailed results provided in **Appendix F.2**. (Reviewer-zgXb)\", \"Conducted experiments on **larger models** ranging from 13B to 70B parameters (Llama2-13B, Qwen2.5-32B, and Llama3-70B), with comprehensive results presented in **Appendix F.3**. (Reviewer-LJXW, BrWg)\", \"Added discussion on orthogonal KV cache quantization methods in **Appendix E**, demonstrating that our work is **compatible with and complementary** to these approaches. (Reviewer-zgXb)\", \"Addressed all other clarification requests from the reviewers. (Reviewer-rCQd, zgXb)\", \"All major modifications have been **highlighted in blue** in the revised manuscript for easy reference. We believe these changes have significantly strengthened our paper and addressed the reviewers' concerns.\"]}",
"{\"title\": \"Thanks for your responses\", \"comment\": \"I would like to thank the authors for the detailed responses to my questions, especially for conducting large amounts of additional experiments, which is very impressive. I will raise my score accordingly.\"}",
"{\"summary\": \"This work introduces CAKE, a method for efficient KV cache management in LLMs that enhances inference by dynamically allocating cache based on each layer\\u2019s spatial and temporal attention demands. By framing cache eviction as a cake-slicing problem, CAKE optimally distributes resources across layers and incorporates a novel eviction indicator to account for the shifting importance of tokens over time. Extensive experiments show the potentials of CAKE.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Pioneering Adaptive Memory Budgets: CAKE is the first work to consider adaptive memory budgets for different layers of LLMs. This innovative approach allows for more efficient memory utilization, improving model performance by allocating resources where they are most needed.\", \"Addressing KV Cache Compression: The paper tackles the timely problem of KV cache compression in LLMs, which is especially relevant for on-device applications. By focusing on this issue, the work makes LLMs more practical and accessible in resource-constrained environments.\", \"Clear Writing: The overall writing is clear, though the paper structure is a bit chaotic. This clarity facilitates comprehension, replication, and further research based on the paper's findings.\"], \"weaknesses\": [\"The motivation for using spatial dispersion and temporal shift in cache size allocation is unclear. Providing more insights or intuition would help clarify its benefits. Additionally, Table 2 shows that the adaptive allocation strategy provides minimal improvement, suggesting it may not be necessary.\", \"The adaptive KV compression method is incompatible with flash-attention. 
Given that flash-attention is widely adopted for efficient training and inference of LLMs, it\\u2019s unlikely that practitioners would choose CAKE over flash-attention in practice.\", \"Only two LLM backbones, Llama and Mistral, were evaluated, which may be insufficient. Consider adding another backbone, such as Phi, Qwen, or Gemma, to strengthen the analysis.\", \"The team has not open-sourced their code, which could raise concerns about the reproducibility of their work.\", \"The baseline methods used for comparison are not sufficiently strong. Consider including the following more robust methods (KVQuant [1] and KIVI [2] should be jointly used with flash-attention, and GEAR could retain its original settings):\", \"[1] Hooper, Coleman, et al., \\\"KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization.\\\" arXiv preprint arXiv:2401.18079 (2024).\", \"[2] Liu, Zirui, et al., \\\"KIVI: Plug-and-Play 2bit KV Cache Quantization with Streaming Asymmetric Quantization.\\\" (2024).\", \"[3] GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM.\"], \"questions\": \"Please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the responses. I believe this paper deserves more recognition.\"}",
"{\"summary\": \"This paper introduces a method for optimizing KV cache eviction through a cache allocation strategy to enhance LLM inference efficiency. The proposed cache allocation adapts to layer preferences, adjusting KV cache injection to improve efficiency while maintaining satisfactory performance. Extensive experiments are conducted to demonstrate the method\\u2019s effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written, with clear motivations for the proposed method and pipeline illustrations. Incorporating layer-wise preference modeling to guide KV caching strategies is intuitive, given the insights from attention dynamics analysis.\\n\\nThe proposed method is straightforward and compatible with existing KV caching strategies, making it easy to integrate while achieving decent efficiency. The authors provide ample experiments to substantiate this point.\\n\\nEmpirical analysis is comprehensive, covering 16 tasks with various LLMs of different specifications, offering a thorough evaluation.\", \"weaknesses\": \"Most empirical analyses focus on smaller LLMs with 7B-8B parameters, which may limit the generalizability of this approach for much larger LLMs. Specifically, it would be valuable to see how computational costs and performance are impacted across different LLM sizes.\\n\\nThe empirical improvements over existing baselines are relatively modest, which could suggest limited practical advantages in some cases.\", \"questions\": \"Please see the weakness aspects above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer BrWg\", \"comment\": \"We are deeply grateful for your thorough review and strong endorsement of our work's technical novelty, presentation quality, and empirical analysis. We appreciate your insightful questions, which we have carefully addressed below.\\n\\n> **Q1:** How this technique could scale to larger models.\\n\\nThis question is of such high quality that it motivates us to delve into the potential of larger models, even though it was not a requirement from the reviewer.\\n\\nAlthough attention patterns vary significantly across different model architectures and sizes, our method can universally analyze their spatial and temporal attention characteristics and adaptively accommodate different model scales and architectures. This adaptability enables our approach to demonstrate robust performance across various model sizes and architectures. To empirically validate this generalizability, we have conducted additional experiments on larger models including Llama2-13B-Chat, Qwen2.5-32B-Instruct, and Llama3-70B-Instruct. We evaluated both low-budget ($128L$) and high-budget ($1024L$) settings on LongBench. 
The table below presents the average scores across 16 datasets (detailed results are provided in **Appendix F.3**).\\n\\n| Method | Llama2-13B-Chat | Qwen2.5-32B-Instruct | Llama3-70B-Instruct |\\n|:---:|:---:|:---:|:---:|\\n| Full Cache | 29.95 | 48.39 | 45.79 |\\n|**Cache budget = 128L**|\\n| StreamingLLM | 22.75 | 33.33 | 37.53 |\\n| H2O | 25.15 | 38.96 | 40.13 |\\n| TOVA | 25.01 | 40.08 | 34.76 |\\n| SnapKV | 25.85 | 40.56 | 41.20 |\\n| PyramidKV | 25.95 | 38.51 | 40.88 |\\n| CAKE (ours) | **26.56** | **41.30** | **42.62** |\\n|**Cache budget = 1024L**|\\n| StreamingLLM | 27.32 | 38.74 | 41.86 |\\n| H2O | 28.76 | 43.89 | 44.25 |\\n| TOVA | 29.10 | 46.47 | 45.03 |\\n| SnapKV | 29.60 | 47.17 | 45.20 |\\n| PyramidKV | 29.84 | 46.73 | 45.08 |\\n| CAKE (ours) | **29.98** | **47.59** | **45.83** |\\n\\nAcross different model sizes, CAKE consistently outperforms other methods, and notably, under the 1024L setting, CAKE even achieves better performance than full-cache settings for both Llama2-13B and Llama3-70B. We have further validated our method's effectiveness on **additional model architectures**, including Qwen and Gemma, with detailed results available in **Appendix F.2**.\\n\\n> **Q2:** Could this technique potentially help transformers stay competitive against RNN-based models like Mamba?\\n\\nYes, CAKE could potentially help transformers remain competitive against RNN-based models like Mamba by addressing several critical challenges:\\n\\n1. **Memory Efficiency:** While Mamba achieves linear memory scaling, CAKE significantly reduces the Transformer's memory footprint by maintaining a fixed KV cache budget without sacrificing performance. Additionally, CAKE is compatible with efficient attention mechanisms such as FlashAttention, further improving inference efficiency.\\n\\n2. 
**Decoding Speed:** CAKE enables faster decoding for long sequences (up to 10x speedup for 128K sequences) through optimized cache management, narrowing the speed gap with Mamba.\\n\\n3. **Attention Capabilities:** A key advantage of transformer-based models over Mamba lies in their stronger ability to handle complex contexts. CAKE preserves this strength while enhancing efficiency.\\n\\nWhile CAKE makes LLMs more efficient for generation, our analysis reveals that attention mechanisms can exhibit redundancy, as only a subset of information is required for effective inference. This insight suggests the potential for combining the strengths of both architectures to develop a more efficient hybrid framework.\"}",
"{\"title\": \"Response to reviewer BrWg\", \"comment\": \"Thank you for your encouraging feedback and support of our research. We sincerely appreciate your recognition and the time you dedicated to reviewing our work!\"}",
"{\"summary\": \"This paper proposes Cascading and Adaptive Key-value cache Eviction (CAKE) method for optimizing Key-Value cache evicting in large language models. Specifically, CAKE assesses each layer\\u2019s KV cache needs by considering attention dynamics in both spatial and temporal dimensions. During the prompt prefilling, CAKE allocates rational cache size for layers by analyzing layer-specific KV cache preferences and manages the memory budgets with the guidance of these preferences in a cascading manner. Besides, CAKE introduces a novel eviction indicator that accounts for both the long-term influence and temporal variability of token importance. Extensive experiments demonstrate CAKE\\u2019s superior performance across different models and memory constraints, especially in low-memory scenarios.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"(1)\\tThis paper provides a novel and practical key-value cache eviction approach to enhance LLM\\u2019s proficiency in handling long sequences, based on layer-specific KV cache preferences in a cascading manner.\\n\\n(2)\\tThe paper is well-organized, and the writing is clear. In particular, Figure 2 clearly points out the differences between the proposed CAKE and other existing models, and I find almost no typos in this paper.\\n\\n(3)\\tThe theoretical analysis (e.g. Theorem 2) rigorously demonstrates the equivalent KV cache eviction results of the proposed preference-guided cascading cache management to the vanilla preference-prioritized adaptive allocation strategy.\\n\\n(4)\\tThe extensive experiments on several open-source LLM benchmarks truly validate the effectiveness of the proposed algorithms.\\n\\n(5)\\tOverall, the proposed algorithm CAKE model is novel, practical and efficient. The corresponding experimental results are extensive and sound.\", \"weaknesses\": \"(1)\\tThe abbreviation \\u201cKV\\u201d in the title and abstract (e.g. 
line 12, or the second line of abstract) should be clearly written as \\u201cKey-value\\u201d, as this word appears for the first time in the whole paper.\\n\\n(2)\\tIn line 23, the abstract section says \\u201cthis approach allows for a global view of cache size allocation, distributing resources OPTIMALLY\\u201d. The word \\u201coptimally\\u201d is somewhat controversial, until you can demonstrate rigorously from a theoretical point of view that the proposed resource distribution method is optimal (with respect to certain theoretical property). Therefore, I would suggest using a less controversial word, like \\u201cadaptively\\u201d.\\n\\n(3)\\tEquation (4) is a little confusing for me. Since $A[i,:]$ is a row vector, $\\\\log{A[i,:]}$ is also a row vector, then is $ A[i,:] \\\\log{A[i,:]}$ the inner product of these two vectors? Clearer explanations should be given.\\n\\n(4)\\tI read the proofs line by line, and according to my experience, the proof is sound. However, since the proof of Theorem 1 is truly basic and short, I believe it will be better to describe \\u201cTheorem 1\\u201d as a proposition. Besides, in Theorem 1, \\u201cFor layer $l \\\\in [L]$\\u201d, does it mean for any (fixed) layer, or mean there exists a layer? More explanation should be provided.\\n\\n(5)\\tIn Table 1 of the experimental results, the proposed CAKE obtains SOTA results in most benchmark datasets. But in some benchmarks, some existing models like TOVA and SnapKV can achieve better results. Could you give more analysis in which kind of benchmark datasets can CAKE obtain better experimental results than the existing baselines?\", \"questions\": \"In Table 1, CAKE could not outperform existing methods on some benchmarks. Could you give more analysis in which kind of benchmark datasets can CAKE obtain better experimental results than the existing baselines? 
Or in what situations will CAKE fail to achieve good results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer zgXb, part1\", \"comment\": \"We greatly appreciate your thorough review and detailed suggestions. Addressing your comments has helped us strengthen both the technical content and clarity of our presentation. Below we address your concerns point by point. Corresponding modifications in the paper are **highlighted in blue**.\\n>#### **W1**: The motivation for using spatial dispersion and temporal shift.\", \"our_approach_is_motivated_by_a_key_observation\": \"attention patterns exhibit significant variations across layers, models, and contexts, as demonstrated by our extensive visualizations in **Appendix J**. This characteristic makes it suboptimal to employ fixed or uniform cache allocation strategies. Through our analysis, we identify two crucial aspects of attention patterns: spatial dispersion and temporal shift. Spatially, we observe that some layers distribute attention broadly across tokens (Fig. 1(a)), while others concentrate on specific tokens (Fig. 1(b)). Layers with broader attention dispersion, when compared with full cache settings, require larger cache sizes to maintain their performance. However, spatial patterns alone cannot fully capture attention dynamics. Temporally, we find that some layers shift their attention focus across different tokens during different steps (Fig. 1(c)), while others maintain fixed attention to tokens (Fig. 1(d)). Layers with dynamic temporal patterns need larger cache allocations to effectively track these changes. Given these observations, we propose that effective cache allocation must consider both spatial dispersion and temporal shift to accurately measure each layer's cache requirements. This comprehensive approach enables CAKE to allocate resources based on layer preferences, adapting to varying attention patterns in different models and contexts.\\n\\nIndeed, the above analyses have been provided in **lines 67-75** of the submitted paper. Please kindly refer to them. 
We trust that this explanation adequately addresses your concerns. Should you require additional clarification, please do not hesitate to inform us, as we are more than willing to provide it.\\n\\n\\n>#### **W2**: Table 2 shows that the adaptive allocation strategy provides minimal improvement.\\n\\nTo better demonstrate the effectiveness of our adaptive allocation strategy, we have conducted additional experiments across multiple models, comparing three allocation strategies: uniform allocation, pyramid allocation, and our preference-prioritized adaptive allocation (P2A). The table below presents average scores across 16 LongBench datasets with a total budget size of 128L:\\n\\n| Model | Uniform | Pyramid | P2A (Ours) |\\n|-------|----------|----------|------------|\\n| Llama2-7B-Chat | 28.36 | 28.69 | **29.29** |\\n| Mistral-7B-Instruct | 36.10 | 35.65 | **37.33** |\\n| Gemma | 31.60 | 30.84 | **32.38** |\\n| Qwen2.5 | 40.43 | 38.02 | **41.68** |\\n\\nCompared with uniform allocation, pyramid allocation shows only modest improvements on the Llama2 model, while it actually suffers performance degradation on other models. This demonstrates a key limitation of fixed-pattern allocation strategies: they rely on prior observations that may not generalize across different model architectures due to varying attention patterns. In contrast, our P2A strategy consistently outperforms both uniform and pyramid allocation across all tested models. This consistent improvement stems from P2A's ability to effectively measure layer-specific cache preferences by analyzing both spatial and temporal characteristics of attention patterns, enabling it to adaptively allocate appropriate cache sizes to corresponding layers.\\n\\n>#### **W3**: Compatibility with Flash-Attention. \\n\\nWe appreciate the reviewer's concern but want to clarify that CAKE is directly implemented on top of Flash-Attention. 
\\n\\nWhile implementing our KV cache eviction method, we still use Flash-Attention's \\\"`_flash_attention_forward`\\\" for full attention computations. The only additional computation needed is to obtain attention weights from the observing window $\\\\mathbf{A}[-S_w:,:]$ for calculating preference scores and eviction indicators, which can be efficiently computed via $\\\\mathbf{A}[-S_w:,:] = \\\\text{Softmax}(\\\\frac{\\\\mathbf{Q}[-S_w:,:]\\\\mathbf{K}^T}{\\\\sqrt{D}})$. For a 32K input sequence, this local attention computation introduces negligible overhead (0.1% of full attention). Therefore, CAKE fully preserves the efficiency benefits of Flash-Attention while providing additional memory optimization through adaptive cache management. To better illustrate the compatibility, we provide a detailed PyTorch-style implementation with Flash-Attention in **Appendix C**, Listing 1.\"}",
"{\"summary\": \"A novel technique for KV cache management that improves computational efficiency by considering spatial and temporal attention dynamics. 10\\u00d7 faster decoding for extended sequences. Layer-wise memory budget allocation.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"S1. Identified and visualized the attention dynamics along the spatial and temporal axes\\n\\nS2. Strong empirical analysis on multiple datasets\\n\\nS3. Actual improvement in inference latency\\n\\nS4. Excellent presentation (great paper flow and beautiful figures)\", \"weaknesses\": \"I do not identify any major weaknesses of the paper.\", \"questions\": \"Q1. I'm just curious how this technique could scale to larger models (no need to verify it empirically)\\n\\nQ2. Could this technique potentially help transformers stay competitive against RNN-based models like Mamba?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for the rebuttal.\", \"comment\": \"Thank you for including additional experiments on larger LLMs and I will keep my positive evaluation of your work. I encourage the authors to update their manuscript by incorporating highlights of their empirical improvements into the main body of the paper based on their response to the second question.\"}",
"{\"title\": \"Response to reviewer LJXW\", \"comment\": \"We sincerely appreciate your positive assessment of our work. Your valuable feedback has helped us improve the quality and completeness of our paper. We have carefully addressed each of your comments below, with the corresponding modifications **highlighted in blue** in the revised manuscript.\\n\\n> **W1:** Most empirical analyses focus on smaller LLMs with 7B-8B parameters, which may limit the generalizability of this approach for much larger LLMs.\\n\\nTo demonstrate the generalizability of our approach across different model sizes, we have conducted additional experiments on larger models, including Llama2-13B-Chat, Qwen2.5-32B-Instruct, and Llama3-70B-Instruct. We evaluated both low-budget (cache size = $128L$) and high-budget (cache size = $1024L$) settings on LongBench. The table below presents the average scores across 16 datasets (detailed results are provided in **Appendix F.3**).\\n| Method | Llama2-13B-Chat | Qwen2.5-32B-Instruct | Llama3-70B-Instruct |\\n|:---:|:---:|:---:|:---:|\\n| Full Cache | 29.95 | 48.39 | 45.79 |\\n| **Cache size = 128L**|\\n| StreamingLLM | 22.75 | 33.33 | 37.53 |\\n| H2O | 25.15 | 38.96 | 40.13 |\\n| TOVA | 25.01 | 40.08 | 34.76 |\\n| SnapKV | 25.85 | 40.56 | 41.20 |\\n| PyramidKV | 25.95 | 38.51 | 40.88 |\\n| CAKE (ours) | **26.56** | **41.30** | **42.62** |\\n|**Cache size = 1024L** |\\n| StreamingLLM | 27.32 | 38.74 | 41.86 |\\n| H2O | 28.76 | 43.89 | 44.25 |\\n| TOVA | 29.10 | 46.47 | 45.03 |\\n| SnapKV | 29.60 | 47.17 | 45.20 |\\n| PyramidKV | 29.84 | 46.73 | 45.08 |\\n| CAKE (ours) | **29.98** | **47.59** | **45.83** |\\n\\nAs shown in the results, CAKE consistently outperforms baseline methods across different model sizes. Under constrained memory conditions (cache size = $128L$), CAKE demonstrates significant advantages over other methods. 
These advantages are maintained with larger cache sizes ($1024L$), where CAKE even achieves slightly better performance than full-cache settings for some models (Llama2-13B: 29.98 vs 29.95, and Llama3-70B: 45.83 vs 45.79). These results indicate that our approach scales effectively to larger models while maintaining its efficiency advantages.\\n\\nAdditionally, we have validated CAKE's effectiveness on **two more model architectures**, namely Qwen and Gemma, further demonstrating the generalizability of our proposed method. Detailed results can be found in **Appendix F.2**.\\n\\n> **W2:** The empirical improvements over existing baselines are relatively modest, which could suggest limited practical advantages in some cases.\\n\\nOur initial submission may have shown modest improvements in some aspects. However, by incorporating our new experiments to address **W1**, we wish to emphasize that CAKE offers significant practical benefits in four critical areas:\\n\\n1. CAKE consistently outperforms baselines across different model architectures (Llama, Mistral, Qwen, Gemma) ranging from 7B to 70B parameters on LongBench across all memory settings (**Appendix F**).\\n\\n2. CAKE shows significant advantages in memory-constrained settings. For example, with cache size $64L$ on Mistral, CAKE achieves an average score of 34.31 on LongBench versus SnapKV's 31.31 and PyramidKV's 30.50 (Table 7, **lines 1147-1149**), highlighting its efficiency in resource-limited scenarios. Similar cases can also be found in other models.\\n\\n3. CAKE not only narrows the gap with full cache but sometimes surpasses it while maintaining minimal memory, as demonstrated on Gemma-7B, Llama2-13B, and Llama3-70B with only $1024L$ cache size.\\n\\n4. CAKE significantly outperforms existing methods on challenging Multi-Retrieval tasks. 
For example, with $1024$ cache size on Mistral, CAKE achieves 71.00 versus SnapKV's 56.93 (Table 10, lines **1340-1342**) and maintains 48.55 accuracy versus SnapKV's 19.14 (Table 10, **lines 1344-1346**) with $512$ cache size. More similar cases can be found in **Appendix G**.\"}"
]
} |
EQZMx8Lc0n | RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates | [
"Md Kowsher",
"Tara Esmaeilbeig",
"Chun-Nam Yu",
"Mojtaba Soltanalian",
"Niloofar Yousefi"
] | We propose RoCoFT, a parameter-efficient fine-tuning method for large-scale language models (LMs) based on updating only a few rows and columns of the weight matrices in transformers. Through extensive experiments with medium-size LMs like BERT and RoBERTa, and larger LMs like Bloom-7B, Llama2-7B, and Llama2-13B, we show that our method gives comparable or better accuracies than state-of-the-art PEFT methods while also being more memory- and computation-efficient. We also study the reason behind the effectiveness of our method with tools from neural tangent kernel theory. We empirically demonstrate that our kernel, constructed using a restricted set of row and column parameters, is numerically close to the full-parameter kernel and gives comparable classification performance. Ablation studies are conducted to investigate the impact of different algorithmic choices, including the selection strategy for rows and columns as well as the optimal rank for effective implementation of our method. | [
"RoCoFT",
"Parameter-efficient finetuning",
"LLMs",
"Neural Tangent Kernel"
] | https://openreview.net/pdf?id=EQZMx8Lc0n | https://openreview.net/forum?id=EQZMx8Lc0n | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xpXNUABeiF",
"wXZp8Vn537",
"wSB7GqKdHz",
"voSORINiXd",
"t0Xi1wu3ei",
"sR9yG0DC7Q",
"rppdBY1yDF",
"p2rmP6aCGj",
"oDGl5GtGcQ",
"jmjJbb0Ydm",
"jg88qvJhcX",
"iZqM7kZwNK",
"gAlmpmSZy1",
"fFEkwhT6nA",
"dINdGDdYSR",
"bfZzldmrrE",
"bD4kvGxWrK",
"WUsyLKWoNf",
"UduCrNzpHu",
"Ua9h3QIsNd",
"UUwOJTe95d",
"JV80EMfPpQ",
"FOfUND8WQu",
"7lh5bpfvtq",
"6JPuXK2jjn",
"5FlHD2Jupl",
"5E6dqMeSRU",
"2fvV3NMbm3",
"0WnbuHvaAh"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1733178716030,
1732308001784,
1733120586412,
1730490141542,
1733003337890,
1732309918090,
1733223285618,
1729086780577,
1733178628863,
1732310550832,
1732309311111,
1730211196863,
1732522678103,
1732919126839,
1737905375511,
1732920144786,
1732308892179,
1732401686744,
1733178326358,
1732402376230,
1732526808889,
1733202093467,
1732307956801,
1733002995668,
1733217611250,
1732310046806,
1732401522447,
1732401117769,
1730468554975
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_34VR"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_Hy7a"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_ZG6N"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_ZG6N"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_34VR"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_cVGh"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_34VR"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_cVGh"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8554/Reviewer_cVGh"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for the insightful comments and raising your evaluation score.\"}",
"{\"title\": \"Response to questions\", \"comment\": \"## Q1. \\\"Avg.\\\" column of Table 1\\nThe two values in the \\\"Avg.\\\" column of Table 1, presented as 'a' or 'a/b', are computed as separate averages based on the two sets of metrics reported in the table. The first value, 'a', represents the average of the first metric (e.g., MCC, Accuracy, etc.) across all tasks, while the second value, 'b', represents the average of the second metric (e.g., F1 score or another secondary metric) across all tasks. We understand that this column was confusing; we therefore removed it in the revised manuscript. Additionally, we edited all captions to clarify which metrics 'a/b' refer to. \\n\\n## Q2. Simultaneous Row-Column updates\\nThank you for this suggestion. While it is possible to update both rows and columns simultaneously, doing so presents practical challenges in our current implementation. Currently, we split each weight matrix into two parts: trainable and non-trainable, based on either rows or columns. Maintaining this partitioning while updating both rows and columns introduces overlap in the trainable parameters, which complicates the setup and management of the updates.\\n\\nOne approach we explored was to use masking to manage simultaneous updates, as described in Appendix D (\\\"RoCoFT with Random Weight Selection\\\"). However, this method did not achieve meaningful memory reduction because the masking operation requires additional memory allocation and storage for the mask itself, akin to the inefficiencies observed with random selection strategies.\\n\\nIn future work, we aim to devise a more efficient strategy for jointly controlling and updating both rows and columns, ensuring minimal overlap and optimal memory efficiency while maintaining the performance benefits of this combined approach.\"}",
"{\"comment\": \"Thank you for your clarifications. I raise my score.\"}",
"{\"summary\": \"The paper introduces RoCoFT, a parameter-efficient fine-tuning (PEFT) method designed for large language models (LLMs) that updates only a subset of rows and columns in transformer weight matrices. This approach aims to retain model accuracy while reducing memory and computational requirements compared to traditional fine-tuning methods. RoCoFT achieves state-of-the-art or comparable results on tasks like GLUE, question answering, and summarization, as well as on benchmarks requiring common sense and mathematical reasoning. The authors analyze the method\\u2019s effectiveness through neural tangent kernel (NTK) theory, showing that kernels from RoCoFT are numerically close to full-parameter kernels, suggesting that fine-tuning a limited parameter subset preserves core model knowledge.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The presentation is clear, and the paper is easy to follow, with only a few minor typos.\", \"The proposed method, RoCoFT, is straightforward and demonstrates strong empirical performance.\", \"The results are reported across multiple tasks and base models, evaluated using various metrics, including memory usage, computation time, and accuracy. This is a good plus to the paper.\"], \"weaknesses\": \"- **Lack of Related Work Discussion**: One weakness of this paper is the limited scope of its related work discussion, focusing primarily on low-rank methods (e.g., LoRA). However, RoCoFT has a closer methodological resemblance to pruning and sparse fine-tuning methods, which are underrepresented in this review. In the parameter-efficient fine-tuning (PEFT) field, methods generally fall into either low-rank or subset of trainable parameter categories, so a more comprehensive comparison should include subset of trainable parameters finetuning baselines (or sparse fine-tuning), such as [1-8]. 
Adding a discussion of these methods in the related work section would strengthen the contextual foundation of this paper.\\n\\n- **Need Additional Novelty Clarification**: The paper lacks a detailed discussion of how RoCoFT differs from existing sparse PEFT methods, such as those presented in [1-8]. \\n\\n- **Lack of Baseline Comparisons**: While RoCoFT has similarities with pruning and sparse fine-tuning techniques, the paper currently lacks direct baseline comparisons to these methods. Including baselines from sparse fine-tuning methods in the experiments would offer a more balanced evaluation of RoCoFT's performance and efficiency.\\n\\n- **Inclusion of More SOTA Models**: The experiments include recent models like DeBERTaV3 and LLaMA-2, which is commendable. However, the study would be more persuasive if it also incorporated newer state-of-the-art models (e.g., Llama3-8B, Llama3.1, Mistral) to reflect the rapidly advancing field of pre-trained model performance.\\n\\n- **Typos**:\\n\\\"prevailing paradiagm\\\" should be corrected to \\\"prevailing paradigm\\\".\\n\\\"state-of-art\\\" should be \\\"state-of-the-art\\\".\\n\\\"massive amount of text\\\" should be \\\"massive amounts of text\\\".\\n\\\"signficant savings\\\" should be \\\"significant savings\\\".\\n\\n- **Clarity of Baseline Model in Figures**: In Figure 2 and Figure 3 of Section 4, the efficiency comparisons are unclear because the base model for fine-tuning (used to report memory and time costs) is not specified. Similarly, Figure 5 lacks clarity on which base model was used for reporting average accuracy across different metrics. 
Including these model details would improve transparency in the experimental setup.\\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks\\n\\n[2] Parameter-Efficient Fine-Tuning without Introducing New Latency\\n\\n[3] Sparse Matrix in Large Language Model Fine-tuning\\n\\n[4] Parameter-Efficient Transfer Learning with Diff Pruning\\n\\n[5] Training Neural Networks with Fixed Sparse Masks\\n\\n[6] Scaling Sparse Fine-Tuning to Large Language Models\\n\\n[7] Composable Sparse Fine-Tuning for Cross-Lingual Transfer\\n\\n[8] Diff Pruning: Parameter-Efficient Transfer Learning with Diff Pruning\", \"questions\": \"- **Discussion on Fisher Information**: Reference [5] uses empirical Fisher information to select the most efficient parameters for fine-tuning. It would be beneficial if the authors discussed the efficiency of this method relative to RoCoFT, as this comparison could highlight RoCoFT\\u2019s strengths and potential trade-offs.\\n\\n- **Memory Cost Clarification**: In Fig. 2, the author reports the memory cost for baselines and RoCoFT. However, the results are not easy to follow/understand. In LLMs, since Adam is the most common optimizer, the memory cost of the full Adam optimizer state will be twice the model weights. For instance, in the Llama-2-7B model, the model weights are 13.6GB and the optimizer will cost 2*13.6GB. However, LoRA can reduce the optimizer memory cost to less than 1%. In Fig. 2, the authors report that the memory cost for RoCoFT and LoRA is still twice the model weights; can you kindly discuss why that is?\\n\\n- **percentage of trainable parameters**: In the PEFT field, papers usually use the percentage of trainable parameters to present the algorithm's efficiency. In Figures 2 and 3 of Section 4 (efficiency comparison), the authors didn\\u2019t clarify which fine-tuning base model was used to report the memory and time costs. 
It\\u2019s also difficult to find in Figure 5 which fine-tuning base model the authors used to report the average accuracy for different metrics. Can the authors discuss the percentage of trainable parameters they use?\\n\\n- **Implementation for Memory Reduction**: Low-rank methods like LoRA use additional trainable adapters, while sparse fine-tuning often applies binary masks to reduce memory. It would strengthen the paper if the authors elaborated on how RoCoFT is implemented to achieve memory reduction and speedup compared to these existing techniques, and discussed it from a systems perspective. Does RoCoFT need to perform full forward and backward propagation for all parameters? Does RoCoFT introduce more modules during the fine-tuning process? \\n\\nI would like to discuss the questions I raised regarding the weaknesses and concerns with the authors. If my concerns are adequately addressed, I would be willing to reconsider my rating.\\n\\n**References**: \\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks\\n\\n[2] Parameter-Efficient Fine-Tuning without Introducing New Latency\\n\\n[3] Sparse Matrix in Large Language Model Fine-tuning\\n\\n[4] Parameter-Efficient Transfer Learning with Diff Pruning\\n\\n[5] Training Neural Networks with Fixed Sparse Masks\\n\\n[6] Scaling Sparse Fine-Tuning to Large Language Models\\n\\n[7] Composable Sparse Fine-Tuning for Cross-Lingual Transfer\\n\\n[8] Diff Pruning: Parameter-Efficient Transfer Learning with Diff Pruning\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear reviewer ZG6N,\\n\\nWe are grateful for your constructive feedback on the NTK regression section, which led us to further investigate the comparison between LoRA, RoCoFT, and full fine-tuning from the perspective of the NTK. We would be happy to address any remaining concerns you may have regarding the revised manuscript and the new experimental results. Please feel free to provide further comments or suggestions, and we will make every effort to incorporate them. We look forward to your feedback.\\n\\nSincerely,\\n\\nAuthors\"}",
"{\"title\": \"Response to weakness points 1-3\", \"comment\": \"We thank the reviewer for their detailed feedback and recommendations on our work. We will be happy to discuss any additional questions the reviewer may have.\\n\\n## W1. Related Work Discussion\\nThank you for highlighting the need for a broader discussion of related work. We agree that while RoCoFT shares similarities with low-rank methods like LoRA, it also has similarities with pruning and sparse fine-tuning techniques.\\nWe will expand the related work section to include more relevant papers from the sparse fine-tuning and pruning categories, as suggested. We appreciate your guidance and references, which will help in strengthening the contextual foundation of the paper.\\n\\n## W2. Novelty Clarification\\nWe thank the reviewer for the suggested list of references. We have indeed overlooked the discussion on sparse fine-tuning methods in our literature review in related works. Below is our view on how our method relates to the sparse fine-tuning methods listed in [1-8], which we will also include in the updated related works section in our paper. Thank you so much for helping us improve this aspect of our paper. \\n\\n\\\"Apart from low-rank adaptor methods, Sparse Fine-Tuning is another group of PEFT methods that focuses on directly training only a very small subset of model parameters during finetuning. Sparsity can be achieved in two different ways, either by pruning after full finetuning or by selecting a sparse set of masks to train before finetuning. Diff pruning[4/8] encourages sparsity by using L0-norm regularization during finetuning, while [7] makes use of the Lottery Ticket Hypothesis[1] to prune the weights after full finetuning. Unlike our proposed method, they both require computational costs close to full finetuning. [3] selects submatrix blocks as masks using maximal gradient change during warmup as the criterion, while [5] selects masks based on Fisher information. 
Both require some precomputation before a sparse mask can be selected for finetuning. [2] selects unimportant weights for task-agnostic finetuning, while [6] proposes a finetuning method that can adaptively grow or shrink the sparsity pattern during finetuning. Unlike our method, which uses only rows and columns, these sparsity masks can be unstructured patterns and less efficient in actual implementation. Our method can be seen as belonging to both low-rank adaptor methods and sparse fine-tuning, as, with only a few rows or columns chosen, the updates are naturally both low-rank and sparse.\\\" \\n\\n## W3. Baseline Comparisons\\nWe thank you for the valuable feedback. We recognize the importance of comparing RoCoFT directly with sparse fine-tuning baselines to present a more balanced evaluation of its performance and efficiency, though we have tried to compare it with some recent state-of-the-art methods.\\n\\nIn response to your suggestion, we have included a new comparison with recent works, including sparse fine-tuning methods. 
Additionally, we will incorporate further baseline comparisons with well-established sparse fine-tuning methods in our experiments to strengthen the evaluation.\\n\\n| Dataset | LoRA-XS[9] | Vera[10] | LoRAFA[11] | SFT[12] | Diff Pruning[8] | FSM[1] | RoCoFT (row) | RoCoFT (column) |\\n|---------|------------|---------|-----------|--------|-----------------|--------|--------------|-----------------|\\n| SST2 | 93.19 | 93.89 | 93.65 | 94.28 | 93.77 | 94.11 | 94.92 | 94.69 |\\n| CoLA | 58.49 | 60.35 | 60.49 | 64.45 | 62.45 | 62.77 | 63.53 | 62.95 |\\n| MNLI | 85.34 | 85.64 | 86.11 | 86.64 | 85.32 | 85.85 | 86.73 | 86.76 |\\n| QNLI | 90.42 | 90.22 | 91.42 | 92.11 | 92.14 | 91.81 | 92.12 | 91.89 |\\n\\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks\\n\\n[2] Parameter-Efficient Fine-Tuning without Introducing New Latency\\n\\n[3] Sparse Matrix in Large Language Model Fine-tuning\\n\\n[4] Parameter-Efficient Transfer Learning with Diff Pruning\\n\\n[5] Training Neural Networks with Fixed Sparse Masks\\n\\n[6] Scaling Sparse Fine-Tuning to Large Language Models\\n\\n[7] Composable Sparse Fine-Tuning for Cross-Lingual Transfer\\n\\n[8] Diff Pruning: Parameter-Efficient Transfer Learning with Diff Pruning\\n\\n[9] LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters.\\n\\n[10] Vera: Vector-based random matrix adaptation.\\n\\n[11] Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning.\\n\\n[12] Scaling Sparse Fine-Tuning to Large Language Models\"}",
"{\"comment\": \"Thanks for your clarification, which addressed most of my concerns. I will raise my score to 6.\", \"just_an_additional_comment\": \"I think there should be some deep connections between RoCoFT and LoRA with rank = 1. Essentially, let B and A in LoRA be of dimensions $d\\\\times 1$ and $1\\\\times d$, and restrict A to be non-learnable with only one value in one position and zero in the other positions. Then, this type of restricted LoRA incremental matrix should reduce to your column-update scheme. A similar argument should hold for the row-update scheme. Therefore, I believe the RoCoFT method may also be explained from this perspective and could be potentially explored in future work.\"}",
"{\"summary\": \"The authors propose a novel method named RoCoFT for parameter-efficient fine-tuning (PEFT). RoCoFT updates only a few rows or columns of the trained parameter matrices, achieving even lower complexity compared to existing PEFT methods. The effectiveness of RoCoFT is supported by neural tangent kernel (NTK) theory, as demonstrated by the authors. The empirical performance of RoCoFT is extensively evaluated on several benchmarks and compared with a large number of baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The method is simple, straightforward, yet effective. The presentation is clear and easy to follow.\\n\\n2. The performance comparison with baselines is extensive. Besides, the learnable parameters in RoCoFT are far fewer than in existing methods, which is very useful.\\n\\n3. As shown in ablation studies, the strategy of choosing rows and columns is robust and does not need much tuning.\", \"weaknesses\": \"1. The NTK analysis in Section 5 is not complete. The results in Tables 5 and 6 only include comparisons between RoCoFT, FT, and the pre-trained weights. However, if other methods, such as LoRA, also have a kernel that is empirically close to the full-parameter kernel, it becomes unclear why RoCoFT can achieve performance improvements over them. Similar experiments on other baselines should also be included.\\n2. Further explanation should be provided on why the few-shot learning performance is used as a downstream task for kernel comparison in Tables 5 and 6. Why are the performances in Tables 1, 2, and 3 not used for kernel comparison?\\n3. The empirical improvements in memory costs in Figure 2 and training time costs in Figure 3 appear marginal, which is inconsistent with the large improvement suggested by Table 4. Please provide a detailed explanation.\", \"questions\": \"1. How are the two values in the \\\"Avg.\\\" column computed in Table 1?\\n\\n2. 
Can the row update and column update be used simultaneously? It seems to me that this simple strategy allows for more flexibility and enhanced performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear reviewer ZG6N,\\n\\nKindly, please let us know of any remaining concerns about our paper. We will be happy to address them.\\n\\nSincerely, \\n\\nAuthors.\"}",
"{\"title\": \"Response to questions\", \"comment\": \"## Q1. Fisher Information\\nReference [5] uses Fisher information to select the most important features for finetuning, and other works like [7,3] try to select the most important features to finetune via weight magnitudes or largest gradient change. \\nOn the other hand, reference [2] takes a different view and selects the unimportant (unused) features for finetuning to different tasks. Interestingly, both approaches work on the typical benchmark datasets considered. This is corroborated by our ablation studies on the choice of rows or columns to finetune, which showed relatively little difference between choosing the most important or most unimportant rows/columns to finetune when scored by the pruning criterion used in WANDA. Therefore we don't make any effort to select the \\\"best\\\" rows/columns to finetune in RoCoFT and just take the first few rows/columns in the weight matrices. This is the main difference from methods that select the best features to finetune using criteria like Fisher information. It is possible for extreme cases of sparsity (e.g. much fewer than the ~1M trainable parameters in Table 1) that these sparse fine-tuning methods based on feature/mask selection can outperform the feature/mask-agnostic approach used in RoCoFT, but for typical benchmark datasets we see little difference. \\n\\n## Q2. Memory Cost Clarification\\nThank you for this insightful feedback. In Figure 2, we used the same experimental setup as in Table 1, where RoCoFT is set to rank 1, while other LoRA-based methods are at rank 2, resulting in LoRA\\u2019s rank being effectively 20 times that of RoCoFT. For a fairer comparison, we then adjusted all LoRA-type methods to rank 10 and found that the memory costs were approximately 3.79 GB for LoRA and 3.57 GB for RoCoFT.\\n\\nWe updated Figure 2 to reflect this fair comparison, showing memory costs after one epoch of full training using SST-2. 
This will clarify the memory cost differences across methods under comparable conditions.\\n\\n## Q3. Percentage of trainable parameters\\nThank you for your feedback. In our paper, we reported the total trainable parameters rather than a percentage. The reason is that RoCoFT does not introduce additional adapter parameters, whereas other methods require adapters that increase the total number of parameters, thus altering the percentage of trainable parameters. Since adapter size varies across methods, using total trainable parameters allows for a more direct and consistent comparison of algorithm efficiency.\\n\\nTo address the base model clarity in Figures 2, 3, and 5, we used RoBERTa-base for fine-tuning and will update the figure captions accordingly to ensure transparency in our experimental setup.\\n\\n## Q4. Implementation for Memory Reduction\\n Thank you for your questions regarding memory reduction and speedup in RoCoFT. Our implementation achieves memory efficiency without introducing additional modules, as we only update a subset of parameters within the existing model structure. Specifically, RoCoFT replaces the layers in the pretrained model with custom modules that update selected rows or columns, managed with nn.Linear(). We split the weights into trainable and non-trainable portions. Only a subset (rank k) of the original weight matrix is marked as trainable, while the remaining parameters are moved to a buffer (non-trainable). This avoids memory overhead from additional trainable adapters or binary masks and enables parameter-specific updates without dynamically creating new tensors. The non-trainable weights are detached and stored in buffers to save memory, ensuring no gradients are computed or stored for these weights. This approach allows us to avoid the full memory cost typically incurred with trainable adapters like in LoRA. During training, RoCoFT does not require full forward and backward propagation for all parameters. 
Instead, the concatenation of trainable and non-trainable weights is performed once and stored, so the model does not introduce additional computational modules or overhead during the fine-tuning process.\"}",
"{\"title\": \"Response to questions\", \"comment\": \"## Q1. Performance with updating only classification head\\nThank you for your suggestion. We conducted an additional experiment where no weights in the pretrained model were updated, and only the classification head was trained. We compared this with the single-column and single-row adaptations of RoCoFT. The results for the SST-2 and MNLI datasets are as follows:\\n| Dataset | Classification Head Only | Single-Column Adaptation | Single-Row Adaptation |\\n|---------|---------------------------|---------------------------|-----------------------|\\n| SST-2 | 88.29% | 93.88% | 94.06% |\\n| MNLI | 80.82% | 85.35% | 85.23% |\\n\\n## Q2. Hyper parameters\\n Thank you for pointing this out. To ensure consistency in evaluating baselines and our proposed method, we have followed the experimental setups described in Xu et al. (2023)[1] and Zhang et al. (2023a)[2], as referenced in Section 4. Additionally, we have provided detailed information about our environmental setup and implementation details in Appendix C.\\n\\nTo address any potential ambiguity, we clarified this in the revised version of the paper by explicitly stating that the hyperparameter settings for the baselines were aligned with those in these prior works, and we ensured that any setup differences are clearly outlined.\\n\\n[1] Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment\\n\\n[2] Adaptive budget allocation for parameter-efficient fine-tuning.\"}",
"{\"summary\": \"The authors proposed a simple fine-tuning method for LLMs that updates only a few columns/rows in the base model. An NTK regression-based analysis is proposed to explain why single row/column updates work, and extensive experiments were conducted to evaluate the method on diverse language tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The method is very simple but shows prominent results for some datasets\", \"The method was evaluated on a large and diverse set of datasets\", \"Applying NTK regression to get an explanation for why the method works - looks interesting\"], \"weaknesses\": [\"Limited novelty of the proposed method: the authors propose to update a few columns/rows in the base model and exploit the existing NTK regression method to explain it.\", \"I don\\u2019t understand how the results in Table 5 are consistent with Table 1, so that we can explain why the method works with NTK regression. In Table 5 the proposed method performs worse than FT, while in Table 1 it is not the case.\", \"In-place updates disable the behavior of the model as an adaptor. This is a trade-off that should be discussed while presenting 0 additional parameters\", \"Missing explanation / intuition why the method fails on some datasets, e.g. MNLI, QNLI, RTE in Table 1.\", \"Missing additional recent LoRA-style baselines with low number of trainable parameters, e.g. [1-3]\", \"The efficiency gains are not significant compared to other LoRA-style methods; also, it is not interesting since the number of trainable parameters is small for adapter-like methods.\", \"[1] Ba\\u0142azy, Klaudia, et al. \\\"LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters.\\\" arXiv preprint arXiv:2405.17604 (2024).\", \"[2] Kopiczko, Dawid J., Tijmen Blankevoort, and Yuki M. Asano. \\\"Vera: Vector-based random matrix adaptation.\\\" arXiv preprint arXiv:2310.11454 (2023).\", \"[3] Zhang, Longteng, et al. 
\\\"Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning.\\\" arXiv preprint arXiv:2308.03303 (2023).\"], \"questions\": [\"I would like to see an experiment where no weights are updated in the pretrained model and only classification head is trained and how the obtained accuracy differs from the single-column/row adaptations.\", \"It is not clear from the Sec.4 if the setup of baselines in terms of hyper parameters is the same as of the proposed method. I\\u2019m concerned that the small differences in the evaluation between the proposed method and baselines stems from setup differences.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to authors\", \"comment\": \"I thank the authors for their efforts to address my comments and improve their paper. However, I am afraid I still have several concerns.\\n\\nTypos: The paper still has writing issues, and it would also be much more convenient for the reviewers if the authors had highlighted the changes using some color.\\n\\nMissing space: benchmarks(Jiang et al., 2024).\\nrows(columns)\\nFine-Tuning and Finetuning are used interchangeably.\\n\\nBitFit: It is surprising to me that the authors have not read relevant information from such a closely related work. Although I am not an author of BitFit, I have learned about its details and its relation to this paper since it was cited here. Consequently, I expect the authors to be knowledgeable about this related work.\\n\\nAs I understand from the authors, the main differences between their method and the alternative (or ablation) mentioned in Bit-Fit (which can also naturally scale to more than one row) are three elements: row mixing, layer norm update, and gradient mask. The original version of the paper did not mention any of these details, and even the current version fails to note the latter two as differences from BitFit.\\n\\nIn terms of credit, I do not see adequate acknowledgment given to the ideas already described in Bit-Fit; even if it was presented as a baseline (or ablation) rather than as their main method, it was still presented in their paper. I highly recommend that the authors revise some sections of the paper to emphasize how they improve upon this idea by identifying the key elements that make it work (including an ablation on all three elements). This is the proper way to convey scientific novelty and contribution.\\n\\nWhy are you choosing the first rows? Doesn't this contradict the entire row selection scheme, which indicates that random is as good as any other selection? I don't understand the issue with what you call row mixing. 
Is it that rows are mixed between layers?\\n\\nI would like to thank the authors for their corrections and for clarifying the NTK evaluation. While I find this clarification valuable, my main concerns about the paper remain. I think the underlying idea is interesting, and I believe that improving the presentation of the paper can greatly enhance its quality. I strongly recommend that the authors consider these suggestions to strengthen their work.\"}",
"{\"comment\": \"Dear reviewer cVGH,\\n\\nThank you for the important points you raised.\\n\\n$\\\\bullet$ We choose the first rows because, implementation-wise, that\\u2019s the simplest. The ablation studies on row or column choice just show there is very little effect on finetuning performance among the different choices. Therefore we just pick the one that\\u2019s convenient for implementation.\\n\\n$\\\\bullet$ As for row mixing, we mean that in the ablation studies of BitFit, they flip a random coin to decide whether a particular layer uses row or column updates. So in their rand row/col experiment roughly 50% of their updates are with rows and 50% are with columns, when considered across all layers. Our method, on the other hand, only considers row-only updates for all layers, or column-only updates for all layers, without mixing their use across layers.\\n\\n$\\\\bullet$ Some of the additional results and corrections were already added to the appendix and the main body of the paper. Since at this stage it is not possible to upload a revised manuscript with colored edits, we will incorporate the clarifications and additional results from these discussions into the main paper once these discussions are finalized.\\n\\nSincerely,\\n\\nAuthors\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"comment\": \"Dear reviewer 34VR,\\n\\nThank you for raising these important points.\\n\\n$\\\\bullet$ We have not made major changes to the main manuscript yet, due to the page limit and the different additional results requested by the four reviewers. Some of the additional results were already added to the appendix. We will incorporate the clarifications and additional results from these discussions into the main paper once these discussions are finalized.\\n\\n$\\\\bullet$ As for the question of the importance of accuracy vs. memory efficiency in the research of PEFT methods, we believe BOTH are very important. We think of the advancements of PEFT methods as pushing the Pareto frontier for accuracy and memory efficiency in finetuning LLMs. Therefore when we look at results like those in Tables 1-3, we don\\u2019t just consider the accuracy numbers, but also the number of trainable parameters. Also, if we really care about accuracy, the best way to improve accuracy is to use more trainable parameters in finetuning (e.g. increasing the rank from 1 to 3), or a pretrained model of larger size. Having a more memory-efficient method will enable us to employ larger models and more trainable parameters. As for methods that compress or quantize the base models, we believe they are independent methods to promote memory efficiency that can be used in conjunction with PEFT methods (e.g. QLoRA). The existence of compression or quantization methods for LLMs does not make research on memory efficiency in PEFT methods redundant.\\n\\n$\\\\bullet$ We are confident that, in addition to proposing the RoCoFT method, explaining the effectiveness of finetuning methods by means of NTK regression is a key novelty of our paper. 
Following the insightful comments of reviewers, we added the NTK regression results on LoRA to the revised paper which shows that the NTK for LoRA with r = 1 is not as close\\nas the NTK for row/column parameters to the full parameter kernel. This view of PEFT as a kernel machine has an important impact on PEFT research. \\n\\nSincerely,\\n\\nAuthors\"}",
"{\"title\": \"Response to the weakness points\", \"comment\": \"We thank the reviewer for raising insightful points. Below, we address the concerns thoroughly.\\n\\n## W1. Limited novelty \\nThe novelty of our paper is twofold. First, to the best of our knowledge, the row-column update of the weight matrices, despite its starkly low complexity and competitive accuracy, has not been proposed as a method in previous papers. Moreover, explaining the effectiveness of a finetuning method by means of NTK regression is in its infancy in the literature. We believe that NTK regression, when finetuning is lazy, is a powerful tool for analyzing the learning dynamics of finetuning.\\n\\n## W2. Consistency of Table 5 with Table 1 \\nWe are sorry for the confusion. There are actually several differences between Table 1 and Table 5. Table 1 is the comparison of our method RoCoFT with other PEFT methods and FT on standard benchmarks, while Table 5 is a comparison of FT against kernel regression using NTKs derived from the full parameter set and the row/column parameter set under a few-shot learning setting. Table 1 finetunes on the whole training set, while Table 5 uses few-shot learning because computing the NTKs on a large training set is expensive. Also, kernel regression using NTKs on the full parameter set and the row/column parameter set in Table 5 are approximations to FT and our RoCoFT methods. The kernel regression performance is usually a little lower than actual finetuning with backpropagation. We are not advocating using kernel regression with NTKs to replace finetuning with backprop (whether FT or RoCoFT). We are just using kernel regression with NTKs to provide an independent view on why the performance of FT and RoCoFT can be close. \\n\\n## W3. Behavior of the model as an adaptor with in-place updates \\nActually, in-place updates do not disable the use of our method as adaptors. 
Once RoCoFT is done, we can compute the difference between the row/column updates and the corresponding original row/column values in the pretrained model, and store these differences as adaptors. This is the same as LoRA, except that LoRA expresses the adaptor as a difference added to the original weight matrix before finetuning, while RoCoFT needs to do some postprocessing to obtain the corresponding adaptors. \\n\\n## W4. Performance on MNLI, QNLI, RTE datasets\\nFor the question on MNLI, QNLI, RTE in Table 1, perhaps we don't understand the question fully. The results of RoCoFT on these 3 datasets are competitive with the rest of the PEFT methods even when they are not the best among all the methods. The results for 1 row/column are weaker due to the limited number of parameters, but the 3 row/column results are promising.\\n\\n## W5. Additional baseline methods\\nWe thank you for the valuable feedback. We recognize the importance of comparing RoCoFT directly with recent sparse and LoRA-style baselines.\\n\\nIn response to your suggestion, we have included a new comparison with recent works, including sparse fine-tuning methods. 
Additionally, we will incorporate further baseline comparisons with well-established sparse fine-tuning methods in our experiments to strengthen the evaluation.\\n\\n| Dataset | LoRA-XS[1] | Vera[2] | LoRAFA[3] | SFT[4] | Diff Pruning[5] | FSM[6] | RoCoFT (row) | RoCoFT (column) |\\n|---------|------------|---------|-----------|--------|-----------------|--------|--------------|-----------------|\\n| SST2 | 93.19 | 93.89 | 93.65 | 94.28 | 93.77 | 94.11 | 94.92 | 94.69 |\\n| CoLA | 58.49 | 60.35 | 60.49 | 64.45 | 62.45 | 62.77 | 63.53 | 62.95 |\\n| MNLI | 85.34 | 85.64 | 86.11 | 86.64 | 85.32 | 85.85 | 86.73 | 86.76 |\\n| QNLI | 90.42 | 90.22 | 91.42 | 92.11 | 92.14 | 91.81 | 92.12 | 91.89 |\\n\\n\\n\\n[1] LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters.\\n[2] Vera: Vector-based random matrix adaptation.\\n[3] Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning.\\n[4] Scaling Sparse Fine-Tuning to Large Language Models\\n[5] Diff Pruning: Parameter-Efficient Transfer Learning with Diff Pruning\\n[6] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks\\n\\n## W6. Efficiency gains\\nThank you for your observation. While it may seem that the efficiency gains compared to LoRA-style methods are modest, our results demonstrate that RoCoFT consistently outperforms LoRA-type methods in terms of accuracy, parameter efficiency, memory usage, and training time, as shown in the results section.\\n\\nThe novelty of RoCoFT lies in its simplicity and its approach of directly modifying existing model parameters without introducing additional adapters or external modules. Unlike adapter-based methods, which require extra trainable parameters and introduce memory overhead, RoCoFT avoids these complexities while still achieving competitive or superior performance.\"}",
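To make the row/column update discussed above concrete, here is a minimal NumPy sketch (ours, not the authors' code; the helper name `rocoft_row_update` is hypothetical). Only the first r rows of a weight matrix are touched, so the implied update R = W_new - W is simultaneously sparse and of rank at most r, which is why RoCoFT can be viewed as both a sparse fine-tuning and a low-rank method:

```python
import numpy as np

def rocoft_row_update(W, grad, lr, r):
    """Apply a gradient step to only the first r rows of W.

    The implied update matrix R = W_new - W is zero outside its first
    r rows, so it is simultaneously sparse and of rank at most r.
    """
    W_new = W.copy()
    W_new[:r, :] -= lr * grad[:r, :]
    return W_new

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # a toy weight matrix
g = rng.standard_normal((8, 8))   # a toy gradient
W_new = rocoft_row_update(W, g, lr=0.1, r=2)
R = W_new - W                     # nonzero only in the first 2 rows
```

Storing R rather than W_new corresponds to the postprocessing step mentioned in W3 for recovering an adaptor from in-place updates.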
"{\"comment\": \"## W5. Finetuning through the lens of NTK regression\\nWe believe there are several misunderstandings over the purpose of the NTK experiments, and we are sorry that the brevity of our presentation, due to page limits, may be responsible for this. We try to clarify in the following.\\n\\n1. It is actually not obvious that the NTK defined by the full parameter set would be similar to the NTK defined by just a few rows/columns, since the NTK of the row/column parameter set is NOT obtained by \\\"changing one row/column\\\" of the NTK of the full parameter set. From the definition of the NTK in Section 5 on p7, $K_\\\\theta(x, x') = \\\\langle \\\\nabla f_\\\\theta(x), \\\\nabla f_\\\\theta(x') \\\\rangle$, it is essentially a sum, over parameters, of products of gradients. With the NTK over the full parameter set it is a sum over the gradients of all parameters, and the NTK for row/column is just a sum over the gradients of the row/column parameters. The size of the sum is very different. This is reflected by the big difference in magnitude in the NTK values (for example in Figure 4), but when rescaled, the similarity patterns over samples defined by the two different NTKs are strikingly similar. \\n\\n2. All the NTKs are computed with the pretrained model without any finetuning (noted by line 391 on p8), and we do not compute any NTKs after finetuning. Indeed, in Section 5 we do not perform any finetuning at all. The whole point of NTK analysis, as in the original NTK paper, is that neural network training can be approximated by kernel regression using the NTK defined by the initial parameters, under the infinite width limit. Malladi et al. extend this analysis to the finetuning of LLMs. 
So if full finetuning can be approximated by kernel regression over the full parameter NTK and RoCoFT can be approximated by kernel regression over the row/column NTK (asymptotically), then the closeness of the full parameter NTK and the row/column NTK serves as independent evidence on why full finetuning and RoCoFT can have similar performance. This is our main motivation for exploring NTKs as a way to understand why finetuning with only a few parameters works. Also, since NTKs are defined by the fixed initial parameters of the pretrained model, good performance of NTK kernel regression (either with the full parameter NTK or the row/column NTK) indicates that good features for downstream tasks are already contained in the pretrained model (without further feature learning). This is the other significance of the NTK analysis. \\n\\nIn [1,2], it is shown that overparameterized networks behave linearly around their initialization, thus yielding a model equivalent to learning with positive-definite kernels. In [3], this phenomenon is further extended to finetuning. \\n\\n[1] Chizat, L., Oyallon, E. and Bach, F., 2019. On lazy training in differentiable programming. Advances in neural information processing systems, 32.\\n\\n[2] Jacot, A., Gabriel, F. and Hongler, C., 2018. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31.\\n\\n[3] Malladi, S., Wettig, A., Yu, D., Chen, D. and Arora, S., 2023, July. A kernel-based view of language model fine-tuning. In International Conference on Machine Learning (pp. 23610-23641). PMLR.\\n\\n## W6. Presentation of the method\\nSection 3 is dedicated to the explanation of the RoCoFT method. The update weight matrices $\\\\mathbf{R}$ and $\\\\mathbf{C}$ are introduced in equation (2). Optimal properties of $\\\\mathbf{R}$ and $\\\\mathbf{C}$, such as rank and robustness to the selection criteria of rows and columns, which were investigated numerically, are explained in Section 6. 
In particular, we explain in the paper that \\\"Figure 5 presents the comparative results of these four strategies on the SST-2, RTE, QNLI, CoLA, and MNLI datasets for rank $r=4$. Across all datasets, the results show consistent robustness, indicating that our method performs well regardless of the selection criteria\\u2014whether based on Max, Min, MinMax, or random selection of rows or columns.\\\"\", \"title\": \"Response to weakness points 5-6\"}",
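As an illustrative aside (a toy sketch of the kernel defined above, not the paper's actual computation; the model and function names are ours), the empirical NTK entry $K_\theta(x, x') = \langle \nabla f_\theta(x), \nabla f_\theta(x') \rangle$ for a tiny two-layer network can be computed over the full parameter set or restricted to the gradients of just the first few rows of W, a RoCoFT-style subset:

```python
import numpy as np

def grads(W, v, x):
    # Per-example gradients of f(x) = v . tanh(W x) w.r.t. W and v.
    h = np.tanh(W @ x)
    dW = (v * (1.0 - h**2))[:, None] * x   # same shape as W
    dv = h                                 # same shape as v
    return dW, dv

def empirical_ntk(W, v, x1, x2, rows=None):
    # NTK entry <grad f(x1), grad f(x2)>; if `rows` is given, sum only
    # over the gradients of the first `rows` rows of W (a RoCoFT-style
    # parameter subset), otherwise over the full parameter set.
    dW1, dv1 = grads(W, v, x1)
    dW2, dv2 = grads(W, v, x2)
    if rows is None:
        return float((dW1 * dW2).sum() + dv1 @ dv2)
    return float((dW1[:rows] * dW2[:rows]).sum())

rng = np.random.default_rng(1)
W, v = rng.standard_normal((16, 4)), rng.standard_normal(16)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
k_full = empirical_ntk(W, v, x1, x2)          # sums over all of W and v
k_rows = empirical_ntk(W, v, x1, x2, rows=2)  # sums over 2 rows of W only
```

The row-restricted kernel is a partial sum of the full-parameter kernel, which is why the two differ in magnitude (they sum different numbers of terms) while their similarity patterns over samples can still agree after rescaling, as the response above describes.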
"{\"comment\": \"Dear reviewer Hy7a:\\n\\nThank you for valuable recommendations about presentation of our work. Please note that incorporating all the requested changes from reviewers could exceed the page limit, especially for related works that we cannot put into the appendix. We don\\u2019t want our paper to be rejected during the review stage due to page limit issues. Therefore we chose to include the answers in this discussion thread and extra tables and results in the appendix before the discussions are finalized. Sorry for not having included these changes directly into the paper last week but unfortunately we cannot modify the pdf at this stage. We are grateful for your suggested changes especially the list of extra references which is very helpful. We will include them in revisions of our paper.\\n\\n**W1-2:** We will add this to the introduction:\\n\\n\\\"Apart from low-rank adaptor methods Sparse Fine-Tuning is another group of PEFT methods that focuses on directly training only a very small subset of model parameters during finetuning. Sparsity can be achieved in two different ways, either by pruning after full finetuning or by selecting a sparse set of masks to train before finetuning. Diff pruning[4/8] encourages sparsity by using L0-norm regularization during finetuning, while [7] makes use of the Lottery Ticket Hypothesis[1] to prune the weights after full finetuning. Unlike our proposed method they both require computational costs close to full finetuning. [3] selects submatrix blocks as masks using maximal gradient change during warmup as criterion, while [5] selects masks based on Fisher information. Both require some precomputation before a sparse mask can be selected for finetuning. [2] selects unimportant weights for task-agnostic finetuning while [6] propose a finetuning method that can adaptively grow or shrink the sparsity pattern during finetuning. 
Unlike our method, which uses only rows and columns, these sparsity masks can be unstructured patterns and less efficient in actual implementation. Our method can be seen as belonging to both low-rank adaptor methods and sparse fine-tuning, as with few rows or columns chosen the updates are naturally both low-rank and sparse.\\\"\\n\\n**W4:** While we agree that applying RoCoFT to newer pretrained models, e.g., Llama3-8B, Llama3.1, and Mistral, should be conducted in future work, the current results on RoBERTa, DeBERTaV3, BART, BLOOMz-7B, GPT-J-6B, LLaMA-2-7B, and LLaMA-2-13B align well with the benchmarks commonly used in the PEFT literature. We believe these results are strong and the PEFT community should know about them.\\n\\n**W6, Q3:** This requires minor edits to the captions of Figures 2, 3, 4, and 5 to reflect the trained model and fine-tuning configurations. \\n\\n**Q4:** We will add this to the appendix:\\n\\nRoCoFT replaces the layers in the pretrained model with custom modules that update selected rows or columns, managed with nn.Linear(). We split the weights into trainable and non-trainable portions. Only a subset (rank k) of the original weight matrix is marked as trainable, while the remaining parameters are moved to a buffer (non-trainable). This avoids memory overhead from additional trainable adapters or binary masks and enables parameter-specific updates without dynamically creating new tensors. The non-trainable weights are detached and stored in buffers to save memory, ensuring no gradients are computed or stored for these weights. This approach allows us to avoid the full memory cost typically incurred with trainable adapters like in LoRA. 
PyTorch pseudocode for replacing a linear layer in a transformer is as below\\n```python\\nclass RoCoFTRow(nn.Module):\\n    # inputs: F is the Linear layer to be converted\\n    # r is the rank (number of rows/columns to be selected)\\n    # use_bias decides whether to train the bias term\\n    def __init__(self, F, r, use_bias):\\n        super().__init__()\\n        # Set rows 1 to rank r as trainable weights\\n        self.trainable_W = nn.Parameter(F.weight[:r, :].clone())\\n\\n        # Set rows r and above as non-trainable and move them to a buffer\\n        self.register_buffer('non_trainable_W', F.weight[r:, :].clone().detach())\\n\\n        # Handle bias\\n        if F.bias is not None:\\n            self.bias = nn.Parameter(F.bias.clone().detach(), requires_grad=use_bias)\\n        else:\\n            self.bias = None\\n\\n    def forward(self, x):\\n        # The rows were split along dim 0, so concatenate them back along dim 0\\n        full_weight = torch.cat([self.trainable_W, self.non_trainable_W], dim=0)\\n        out = torch.nn.functional.linear(x, full_weight, self.bias)\\n        return out\\n```\\nThe version for columns is implemented similarly. During training, RoCoFT does not require full forward and backward propagation for all parameters. Instead, the concatenation of trainable and non-trainable weights is performed once and stored, so the model does not introduce additional computational modules or overhead during the fine-tuning process.\"}",
"{\"title\": \"Response to Questions 1-9\", \"comment\": \"## Q1. Notion of 1-row and 1-column\\n$RoCoFT_{\\\\textrm{1-Row}}$ and $RoCoFT_{\\\\textrm{3-Row}}$ respectively select the first row and the first three rows of each weight matrix for in-place finetuning. In the revised version, we clarified this in line 205 as \\\"RoCoFT$_{r\\\\textrm{-Row(Column)}}$ finetunes the model according to equation (2), where in $R$ and $C$ the first $r$ rows(columns) are nonzero, respectively\\\".\\n\\n## Q2. Typo in RoCoFT 3-row using 5 times the number of parameters as the 1-row\\nThank you for pointing this out. There was a typo in the reported TTPs in Table 1, which is corrected in the revised manuscript.\\n\\n## Q3. Comparison with BitFit\\nThe BitFit method is included in Tables 1, 2, and 4. Please note that BitFit, in terms of the number of trainable parameters, is only comparable to $RoCoFT_{\\\\textrm{1-Row(Column)}}$, in other words, only when the rank is 1. When finetuning with BitFit, one cannot scale the number of trainable parameters. Therefore, we mostly used LoRA for comparison. \\n\\n## Q4. Typos\\nThank you for pointing out the typos and grammatical errors. We corrected them.\\n\\n## Q5. Abbreviations in the abstract \\nIn the revised manuscript, we expanded the abbreviations in the abstract.\\n\\n## Q6. Selection strategy in the abstract\\nIn the revised paper we included this in the abstract: \\\"Ablation studies are conducted to investigate the impact of different algorithmic choices, including the robustness of RoCoFT to any selection of rows and columns, as well as the optimal rank for the effective implementation of our method\\\".\\n\\n## Q7. Notion of a/b for metrics\\nThank you for raising this point; we agree that the captions were not clear. In Table 1, Accuracy/F1 score is reported for MRPC and QQP, and Pearson/Spearman correlations for STS-B. 
\\nIn table 2, for SQuADv1.1 and SQuADv2.0 datasets results are reported using Exact Match (EM)/F1 scores and for CNN/Daily Mail datasets results are reported using ROUGE metrics as ROUGE-1/ROUGE-2/ROUGE-L. The captions are edited now to clearly reflect the metrics.\\n\\n## Q8. Overfitting in full finetuning\\nYes, we agree that there could be overfitting with FT for some of the smaller datasets like RTE. But we believe this is exactly the reason why we should consider alternatives to full finetuning like many PEFT methods when the downstream task dataset is small. We could apply regularization like early stopping or dropout to FT but this could be more tricky to get right than just directly using PEFT methods.\\n\\n## Q9. Optimal rank evaluation not matching Table 1\\nPlease note that in Table 1 $RoCoFT_{\\\\textrm{1-Row}}$ and $RoCoFT_{\\\\text{1-Column}}$, have 0.083M TTPs comprised of the attention and classifier layer, whereas in Table 7, in order to illustrate the effect of rank, we only finetune the attention layers with 0.022M TTPs. Therefore, comparing the accuracy of SST2 in these two tables is not fair.\"}",
"{\"comment\": \"Dear authors, thank you for your efforts to provide additional clarifications.\\n\\n**W2. Consistency of Table 5 with Table 1**\\nThank you for your clarification; I would suggest clarifying it further in the manuscript rather than addressing Malladi et al. (2023) work.\\n\\n**W3. The behavior of the model as an adaptor with in-place updates**\\nPlease add it to the main text. \\n\\n**W4. Performance on MNLI, QNLI, and RTE datasets**\\nI'm sorry for using the wrong terminology. I meant: why is the method's performance not better on these datasets, as it is on others?\\n\\n**W5. Additional baseline methods**\\nThank you for the additional results. I would suggest including these results, and results on other datasets from the GLUE benchmark, in your revised manuscript as well.\\n\\n**W6. Efficiency gains**\\n\\nI agree that simplicity is the key component of your method. However, the main contribution of a new adaptor method should be, in my view, its accuracy. \\nConsidering new methods that compress the base model and/or train small adaptors, the memory efficiency of the proposed method is a less desirable property. \\nMoreover, it is hard to see any significance in the accuracy gains of the proposed method compared to the baselines.\\n\\nWhile I see this method as simple and competitive with the baselines, it is hard to see additional novelties that should be published in this top-tier conference. In addition, while the method is evaluated on a diverse set of datasets, more recent baselines should be included with a complete comparison (like all datasets from GLUE). \\n\\nI also don't see any changes in the manuscript.\\n\\nI prefer to preserve my score.\"}",
"{\"comment\": \"Dear reviewer cVGh,\\n\\nThank you for your feedback during the rebuttal and for highlighting BitFit as a related work to acknowledge in the paper. BitFit is only one related work, and RoCoFT can be considered similar to other PEFT methods already thoroughly discussed in the paper. That said, we did not entirely ignore the BitFit method; it was discussed in our initial submission. Even though uploading the revised manuscript is not possible at this stage, please note the following revisions that we will make to our paper in the final submission:\\n \\n$\\\\bullet$ Acknowledging the BitFit method and stating its limitations compared to RoCoFT is definitely our goal in the final revision of our paper. \\n\\n$\\\\bullet$ The missing spaces in \\\"benchmarks(Jiang et al., 2024)\\\" and \\\"rows(columns)\\\", and the interchangeable use of \\\"Fine-Tuning\\\" and \\\"Finetuning\\\", which require a minor revision of the manuscript.\\n\\nWe believe the findings of our paper are strong and the PEFT community should know about them. If there are no remaining technical concerns, we would greatly appreciate your reconsideration of the score. Thank you again for the valuable comments on our work.\\n\\nSincerely,\\n\\nAuthors\"}",
"{\"title\": \"Response to weakness points: updates with new numerical experiments.\", \"comment\": \"We thank the reviewer for the detailed feedback and recommendations, which enriched our work.\\n## W1. NTK analysis for LoRA\\nFollowing your suggestion, we have tried running NTK on LoRA with rank 1 on a few datasets. The kernel regression results on 16-shot learning are as follows (compared to Table 5): \\n\\n| Dataset | SST-2 | MR | CR | QNLI | RTE | QQP | \\n|---------|------------|---------|-----------|--------|-----------------|--------|\\n| LoRA | 88.5(0.7) | 84.5(1.4) | 93.2(0.5) | 59.9(3.0) | 58.8(4.7) | 58.2(2.6) | \\n\\nWe have also updated Figures 4 and 7 of our paper with the corresponding NTK plots for these datasets. They look similar to the NTK for the full parameter set, but not as visually close as the NTK for the row/column parameters. But in terms of relative l1/l2 distance to the NTK for the full parameter set (compared to Table 6), the NTK for LoRA is also close: \\n\\n| Metric | SST2 | MR | CR | QNLI | RTE | QQP |\\n|---|----------------|----------------|----------------|----------------|----------------|----------------|\\n| p=1 | 0.090 (0.022) | 0.086 (0.021) | 0.108 (0.024) | 0.077 (0.017) | 0.119 (0.027) | 0.084 (0.021) |\\n| p=2 | 0.108 (0.028) | 0.102 (0.027) | 0.132 (0.036) | 0.103 (0.025) | 0.150 (0.039) | 0.106 (0.027) |\\n\\nHowever, our motivation for showing that the NTKs for the full parameter set and the row/column parameters are close is to provide evidence to explain why FT and RoCoFT can have similar performance. We do not intend to use the similarity to the NTK of the full parameter set as a way to rank the performance of PEFT methods, as there are several subtleties involved, including how well the NTK kernel regression can approximate each of the PEFT methods. 
For example, for LoRA, since there are new adaptor variables involved, unlike the NTK for FT or RoCoFT, the corresponding NTK for LoRA depends on how those variables are initialized (we use the default initialization in the huggingface PEFT library in the above experiments). The closeness of the NTKs of these PEFT methods to the NTK of the full parameter set simply suggests that the corresponding PEFT method could have performance close to full finetuning. But it would indeed be interesting future work to use NTKs to perform a more refined analysis of different PEFT methods.\\n## W3. Empirical improvement in memory cost\\nThe apparent discrepancy between the marginal improvements in memory costs (Figure 2) and training time costs (Figure 3) and the larger improvements suggested in Table 4 arises because Figures 2 and 3 reflect experiments conducted with the Adam optimizer, which inherently doubles the memory requirement by maintaining additional states (e.g., moment estimates) for each parameter. This effect reduces the relative advantage of RoCoFT, as the optimizer memory dominates the overall cost. In contrast, Table 4 highlights the theoretical memory and computational efficiency specific to RoCoFT's architecture, independent of optimizer overhead, demonstrating the substantial savings achieved by our method in terms of trainable parameters and memory efficiency. We will clarify this distinction in the revised paper.\\n\\n## W2. Few-shot learning for kernel comparison\\nThis is mainly due to the high computational cost of computing NTK matrices, which scales as $N^2$, where $N$ is the number of training examples. As we are trying to understand why FT and RoCoFT can have similar performance using NTK analysis, we believe performing the analysis on a representative subset of the datasets is sufficient. Also, the results for K=64 shots are already fairly close to finetuning using the full training set on many datasets.\"}",
"{\"comment\": \"Dear reviewer Hy7a,\\n\\nWe are grateful for your constructive feedback, which has greatly contributed to improving the quality of our work.\\nWe would be happy to address any remaining concerns you may have regarding the revised manuscript and the new experimental results. Please feel free to provide further comments or suggestions, and we will make every effort to incorporate them promptly and thoroughly. We look forward to your feedback.\\n\\nSincerely,\\n\\nAuthors\"}",
"{\"title\": \"Respond to reviewers\", \"comment\": \"I would like to thank the authors once again for their efforts to improve their paper. Unfortunately, as I mentioned earlier, the paper requires substantial revision before it can be considered for publication.\\n\\nSpecifically, the contribution of this work should focus on differentiating the proposed solution from the one previously presented in BitFit. The current version of the paper does not effectively convey this distinction, and the reader does not gain valuable insights into what the authors did to ensure the success of their method compared to previous approaches.\\n\\nAs a suggestion, I recommend that the authors emphasize these aspects and consider developing an optimal method for row/column selection that is suitable for this type of fine-tuning.\"}",
"{\"title\": \"Response to weakness points 4-6\", \"comment\": \"## W4. More SoTA Models\\nThank you for pointing this out. We appreciate the suggestion to include newer state-of-the-art models, such as Llama3-8B and Llama3.1, to better align with the rapid advancements in pre-trained models. While our experiments currently focus on widely used models like DeBERTaV3 and LLaMA-2 to establish the effectiveness of RoCoFT, we acknowledge that incorporating newer models would further strengthen the study by demonstrating its scalability and applicability to the latest architectures. Since the suggested newer models were only a few months old when we submitted our paper and there were not many published results on them in the PEFT literature compared to LLaMA-2 or DeBERTaV3, we decided to go with the older models given our computational resource constraints. \\nIn future work, we plan to extend our evaluations to include these emerging models. However, it is important to note that resource constraints, such as computational requirements and the availability of pretrained checkpoints, can impact the feasibility of incorporating newer models within the current scope. Nonetheless, we are committed to adapting RoCoFT to the most recent developments in the field to ensure it remains relevant and competitive.\\n\\n## W5. Typos\\nThank you for pointing out the typos. We have corrected them.\\n\\n## W6. Clarity of Baseline Model in Figures\\nThank you for this helpful observation. We used the RoBERTa-base model as the base model for fine-tuning in Figures 2, 3, 4, and 5, with a batch size of 32. We will update the figure captions and descriptions in Section 4 to clearly indicate the base model and batch size used for reporting memory, time costs, and accuracy metrics, improving the transparency of our experimental setup.\"}",
"{\"title\": \"Response to weakness 4\", \"comment\": \"## W4. Novelty and Comparison with random selection\\nWe disagree with the reviewer that the performance of row-column updates is merely as good as random selection. We kindly refer the reviewer to Tables 1, 3, 10, and 11 for a comprehensive study of this comparison. Please see a vignette below:\\n\\n**BLOOMz$_{7B}$** (Tables 3 and 11)\\n\\n| **Method** | **# TTPs** | **BoolQ** | **PIQA** | **SIQA** | **H.Sw.** | **W.Gra.** | **ARCe** | **ARCc** | **OBQA** | **M.Ar.** | **G.8K** | **A.S.** | **S.eEq** | **S.MP** |\\n|-----------------------|------------|-----------|----------|----------|-----------|------------|----------|----------|----------|-----------|----------|----------|-----------|----------|\\n| RoCoFT$_\\\\text{3-Row}$ | 13.37M | 66.33 | 74.53 | 73.56 | 56.60 | 72.14 | 73.29 | 57.48 | 72.92 | 79.76 | 70.94 | 70.95 | 70.90 | 54.42 |\\n| 1% of the model parameters selected through uniform sampling | 70.4M | 65.76 | 74.62 | 73.50 | 56.39 | 72.11 | 72.89 | 56.88 | 72.43 | 79.78 | 71.11 | 70.76 | 70.91 | 54.37 |\\n\\n**Roberta$_\\\\text{Large}$** (Tables 1 and 10)\\n\\n| **Method** | **# TTPs** | **CoLA** | **SST2** | **MRPC** | **STS-B** | **QQP** | **MNLI** | **QNLI** | **RTE** |\\n|----------------------|------------|-----------|-----------|----------|--------------|---------------|---------------|-------------|-----------|\\n| RoCoFT$_\\\\text{3-Row}$ | 0.666M | 67.39 | 96.69 | 91.05/92.19 | 92.10/92.10 | 90.82/86.11 | 90.98 | 94.85 | 87.83 |\\n| 10% of the model parameters selected through uniform sampling | 35.5M | 65.32 | 96.59 | 90.93/92.03 | 92.10/92.05 | 90.97/86.78 | 90.89 | 95.06 | 87.91 |\\n\\nIn the tables above, please note the significantly lower number of TTPs in RoCoFT while it achieves competitive or higher accuracies.\\nMoreover, the novelty of our paper is twofold. 
First, to the best of our knowledge, the row-column update of the weight matrices, despite being starkly low-complexity and showing competitive accuracies, has not been proposed as a method in previous papers. Second, explaining the effectiveness of a fine-tuning method by means of NTK regression is in its infancy in the literature. We believe the NTK regression we investigated is a powerful tool for analyzing the learning dynamics of finetuning.\"}",
"{\"title\": \"Response to weakness points 1-3\", \"comment\": \"We sincerely appreciate the time and effort the reviewer has dedicated to evaluating our submission. We would be happy to discuss any additional concerns the reviewer may have.\\n\\n## W1. Typos and clarity\\nThank you for raising these points. In the revised manuscript, we corrected the typos and clarified the points raised by the reviewers. The changes to the text are marked with \\\"...\\\" in the response to the reviewers.\\n\\n## W2. BitFit\\nThank you for pointing out this ablation study in the BitFit paper [1]; we missed this particular ablation study when we read the paper. The random row/column update in [1] is indeed similar to our proposal, but there are several important differences in terms of motivation and implementation. In terms of motivation, the authors of the BitFit paper intend to show that, for the same number of parameters, updating the bias parameters is better than updating other parameters like random rows/columns. But the main limitation of BitFit is the limited number of bias parameters, which makes it difficult to increase the capacity of the finetuning model; by using row/column updates, we can easily increase the capacity of the finetuning model, since there are many more row and column parameters than bias parameters. In terms of implementation, there are also several differences (with reference to the BitFit GitHub implementation). For example, we use rows or columns alone without mixing them, we don't update the LayerNorm parameters, and we don't use gradient masks in our implementation, as they are less efficient. \\n\\nAs for the difference in performance between BitFit and updating random rows and columns in their ablation studies, we spent some time running the following comparison experiment. 
\\nWe ran their BitFit GitHub code using the recommended parameters listed in Table 6 of their appendix with bert-base-cased, and we used random seeds 0-4 for the smaller datasets in the GLUE benchmark. Below are the results: \\n\\n| **Method** | **CoLA** | **SST2** | **MRPC** | **STS-B** | **RTE** |\\n|----------------------|------------|-----------|-----------|----------|------------|\\n| BitFit | 57.81(1.01)| 90.73(0.24)| 89.92(0.21) | 88.00(0.06) | 70.90(2.59)|\\n\\nThese numbers are a bit lower than those reported in Table 3 of the paper, which could be due to different random seeds or a different implementation before they released their GitHub code. \\n\\nInstead of using their random row/column implementation, due to efficiency issues with gradient masks, we directly replaced the model with our RoCoFT model implementation (also in the submitted supplementary materials) using a single row without bias, and we just used their optimization and evaluation code. For hyperparameters, we did not do a detailed search but only chose from Table 6 of the BitFit paper or Table 9 of our submission (which was for RoBERTa). We obtained the following results: \\n\\n| **Method** | **CoLA** | **SST2** | **MRPC** | **STS-B** | **RTE** |\\n|----------------------|-----------|-----------|----------|------------|------------|\\n| RoCoFT$_\\\\text{1-row}$ | 56.75(0.59)| 90.78(0.21)| 89.07(0.80) | 86.27(0.22) | 66.20(1.34)|\\n\\nApart from RTE and STS-B, the results are very similar (within standard error). For RTE, we do notice that updating the bias is better than updating 1 row or 1 column, as is also seen in Table 1 of our submission (about a 1-point difference between BitFit and RoCoFT 1-row or 1-col). For STS-B, the difference could be due to the difference between bert-base-cased and RoBERTa. 
We believe the difference between updating the bias and updating 1 row or 1 column is much smaller than the numbers reported in the BitFit paper, either due to the choice of random seeds or their specific implementation of random mixing of rows and columns. \\n\\nWe want to emphasize that we believe BitFit is a very strong method when the number of parameters is limited. In the LoRA paper, BitFit is just as good as LoRA of rank 1, and in Table 1 of our submission, BitFit has almost the same numbers as RoCoFT 1-row and 1-col, if not better. The main issue is that, unlike LoRA and our method, it cannot increase its capacity to rank 2, 3, 4, or above to improve its performance. \\n\\n## W3. Credit to BitFit\\nWe understand that the reviewer is concerned with the similarities between the BitFit method and RoCoFT. However, the BitFit finetuning method freezes all the parameters in the weight and classification layers and finetunes only the additive bias terms. The row/column update appears in Table 1 of [1] only as an ablation study, not as the main method. We kindly refer the reviewer to line 92 of the manuscript, where we cited the BitFit paper as one of the main related works, and to Tables 1, 2, and 4, where we compared our method with the BitFit method.\\n\\n[1] Zaken, E.B., Ravfogel, S. and Goldberg, Y., 2021. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199.\"}",
"{\"summary\": \"In this paper, the authors address the challenge of efficiently adapting a large language model to a new task. This problem, known as Parameter Efficient Fine Tuning (PEFT), has gained significant attention in recent years following LoRA's success. The main observation in this paper is that training only a small subset of rows or columns of the original weight matrices is sufficient for attaining good performance on the new task. This means fine-tuning could be performed by updating a few parameters with no memory overhead (as is required with LoRA-style methods). This type of fine-tuning is evaluated on multiple datasets and using several base models. The results demonstrate that this approach is competitive with leading baselines.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The authors study an important problem in LLMs.\\n\\nThe method is relatively efficient and lightweight.\\n\\nThe evaluation covers multiple transfer tasks and several base models.\\n\\nThey provide an NTK based empirical evaluation that aims to explain the observed phenomenon.\", \"weaknesses\": \"The paper is not well written, multiple parts are not clear, and there are many typos.\\n\\nIn essence, the method presented in this paper was already presented in another paper [1].\\nIn fact, in [1] the authors wrote:\\n\\n\\u201c We randomly sampled the same amount of parameters as in BitFit from the entire model, and fine-tuned only them (\\u201crand uniform\\u201d line in Table 3). The results are substantially worse across all tasks; similar patterns are observed when the random parameters are sampled as complete rows/columns in the parameter matrices (\\u201crand row/col\\u201d line in Table 3). \\u201d\\n\\nThis basically indicates that the authors in [1] had already evaluated the procedure detailed in this paper and concluded that updating the bias terms (also known as BitFit) is better. 
\\nThe results by the authors demonstrate comparable performance between single row/column updates and BitFit. In contrast, the authors in [1] demonstrated that row/column updates do not work well on some datasets. Can the authors explain why there is a performance gap between the evaluation in [1] and what is reported in this paper?\\n\\n\\nAnother problem is that proper credit is not given to [1], which was the first to propose using row/column updates.\\n\\n\\nEven when ignoring the fact that this idea was already presented in [1], I can still see value in providing new insights about this row/column optimization scheme. But I don\\u2019t see the paper providing such new insights in its current form. The row/column selection strategy is pretty standard, and the evaluated selection schemes work as well as random selection.\\n\\n\\n\\n[1] Ben Zaken et al. BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models.\\n\\nThe NTK perspective is also unclear; since it is primarily empirical, I don\\u2019t see what new intuition is gained from these evaluations. It is intuitive that changing only one row/column from the entire matrix won\\u2019t change the NTK much, but also that a few steps of fine-tuning won\\u2019t. If anything, the authors should have also compared these kernels to the original kernel (before fine-tuning). If they are all similar to the original one (before fine-tuning), then I don\\u2019t understand what we gain from this insight. \\n\\nIn terms of presentation, the paper needs significant improvement. 
Currently, the results are presented without providing a clear explanation of the \\u201cmethod.\\u201d Specifically, the scheme for the selection of rows and columns is only described in the results section.\", \"questions\": \"In the results, the authors detail methods termed 1-row and 3-row without explaining what those are.\\n\\nAlso, regarding the number of parameters, it seems that the 3-row uses 5 times the number of parameters as the 1-row. So, is it three rows vs. one? Something does not make sense here.\\n\\nWhy are the methods presented in Table 1 not included in all other tables?\\nFor example, why isn\\u2019t BitFit (which is the most related paper to this one) included in Tables 2+3+4 and Figures 2+3?\", \"multiple_typos\": \"\\\"paradiagm\\\" -> \\\"paradigm\\\".\\n\\\"mermory\\\" -> \\\"memory\\\".\\n\\\"signficant\\\" -> \\\"significant\\\".\\n\\\"tranformer\\\" -> \\\"transformer\\\".\\n\\n\\u201ccomputation-efficient\\u201d -> \\u201ccomputationally efficient.\\u201d\\nIn the abstract \\u201cour kernel\\u2026are numerically\\u201d-> should be *is* numerically\", \"several_abbreviations_are_mentioned_in_the_abstract_without_introducing_what_they_mean\": \"RoCoFT, PEFT..\\n\\nNo intuition about the selection is provided in the abstract.\\n\\nMany times, two numbers are presented without explaining what they mean, for example, in line 203: 85.65/90.61?! And in many cases in the tables. This is not clear, even from the caption, which tries to explain what they mean.\\n\\nIn some cases, as shown in Table 1, FT is substantially worse than many low-rank methods, for example, in RTE. 
Doesn\\u2019t this suggest that there is severe overfitting?\\n\\nIn the optimal rank evaluation, the performance of the RoCoFT method is not consistent with the results of the same method presented in Table 1 (for this data, SST2).\\n\\n\\nOverall, I would recommend the authors rewrite the paper as an \\u201cinsight paper\\u201d, which provides empirical evaluations that support a phenomenon, rather than a \\u201cmethod paper\\u201d. It would also be valuable to look into the dedicated scheme for selecting the rows/columns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EQAHilKZ8D | Utilizing Visual Properties to Achieve Better Representations of Objects | [
"Zhiyu Xu",
"Qingliang Chen"
] | In recent years, large vision models have made significant advancements and excelled in tasks such as detection, segmentation, and tracking. This is partly due to vision models’ good representation of visual objects. Although the recently proposed SAM (the Segment Anything Model) and the one/few-shot models based on SAM have wide applicability across many tasks, some researchers have found that they do not perform well on certain downstream tasks. In this paper, we focused on a specific group of these objects, which can be summarized as glass-like objects, and quantitatively studied the inadequacies in the vision models’ feature representation of glass-like objects using the representation accuracy (RA) metric we proposed. Then, we proposed a novel, extremely simple method that introduces almost no additional computation to address these inadequacies. The main idea is to utilize the visual properties of target objects to find the representation dimensions that dominate in recognizing them and to leverage this information accordingly to achieve better representations of target objects. Using representation accuracy and setting these representations as references in one-shot segmentation tasks, our experiments demonstrated the substantial effectiveness of our method. | [
"Vision",
"Segmentation"
] | Reject | https://openreview.net/pdf?id=EQAHilKZ8D | https://openreview.net/forum?id=EQAHilKZ8D | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"swhDgsDQyA",
"olg4QUQNFL",
"n10W12mIK1",
"jijiCHJbzY",
"cD1J3UfaNN",
"LI0eGzyMN8",
"0o4Cy1thy5"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision",
"official_review",
"meta_review",
"official_review"
],
"note_created": [
1730182641299,
1730542226162,
1729612788035,
1737523556562,
1729584959302,
1734576753414,
1730491736172
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3125/Reviewer_SQan"
],
[
"ICLR.cc/2025/Conference/Submission3125/Reviewer_8qBQ"
],
[
"ICLR.cc/2025/Conference/Submission3125/Reviewer_KPFD"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3125/Reviewer_3ufn"
],
[
"ICLR.cc/2025/Conference/Submission3125/Area_Chair_n2Kh"
],
[
"ICLR.cc/2025/Conference/Submission3125/Reviewer_HqNi"
]
],
"structured_content_str": [
"{\"summary\": \"The paper is about segmenting reflective objects. First, they show that local patch representations of glass/reflective surfaces mostly capture the underlying background (and not the glass object itself). Then they propose a feature engineering process to alleviate this effect. They show slight improvements on NN patch retrieval tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"-The paper sheds light on the not-so-common problem of segmenting reflective objects.\", \"weaknesses\": \"-The manuscript contains many spelling/grammar mistakes e.g. L081.\\n\\n-The citations are not properly handled (citep vs citet) e.g. L097.\\n\\n-Some claims are too bold and not justified e.g. \\u201cMatcher uses DINOv2 with a ViT-L/14 as the default image encoder and also in this paper authors found that DINOv2 has better patch-level representation ability than SAM, which promotes exact patch matching between different images so it can be considered that DINOv2 is the best VFM for representing similarities at the patch level.\\u201d\\n\\n-The RA metric is not novel; it is the accuracy of a NN retrieval classifier.\\n\\n-The paper is hard to read. The introduced notation does not disclose scalar/matrix/tensor dimensions, which makes it confusing.\\n\\n-The addition and removal of glass barriers is not clearly defined. The definition of a \\u201cglass barrier\\u201d is also unclear to me at this point. Figure 2 supposedly explains this but there is no pointer to that figure in the text, if I am not mistaken.\\n\\n-From a high-level point of view, what the authors are doing is labeling additional data. I don't think their method is superior to training/finetuning the feature extractors with the additional labeled data.\", \"questions\": \"-Overall, I don't think a rebuttal would clear the doubts I have, and I think the authors should deeply revise the paper. 
The content could be improved by adding additional data-driven baselines (i.e. machine learning based) and by taking into consideration the suggestions listed in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper considers the difficult problem of dealing with glass and mirror objects in images.\\nIt demonstrates, based on a newly introduced metric, the poor performance of representations from standard vision foundation models, in particular in the form of the one-shot segmentation method Matcher, when evaluated specifically on such data. The paper then proposes a scheme to select, in a supervised manner based on a few images, feature dimensions that are most affected by the addition/removal of the glass. By 'correcting' these dimensions before applying the Matcher algorithm, they show a small improvement in the results on three datasets.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The paper tackles a challenging open problem in visual understanding.\", \"A new metric is introduced.\", \"The core idea of selecting feature dimensions related to the effect of glass or mirrors has some potential, but should be worked out in a more clever way.\"], \"weaknesses\": [\"The paper is badly structured and, as a consequence, hard to follow. Figures are not referred to in the text. The text has wrong references to tables (e.g. l. 199 refers to Table 2, which has mIoU results, while the text discusses RA results). There are a lot of forward references; e.g. Section 3 on problem analysis discusses results of tables from Section 5 on experimental results, without telling the reader what data is used or what the exact setup is.\", \"Overall, the above point makes it hard to know precisely what the authors did. I had to make some guesses at several points. Most importantly, I'm still not sure about the actual task they are performing / evaluating. In some parts, it's suggested that this is about segmentation of glass/mirror objects. But in other places it's about matching between a reference image and a target image. 
I assume what the authors did in the end is close to the one-shot segmentation of Matcher.\", \"The method is not described rigorously, making it impossible to reproduce the results. In particular, the 'most' function in eq. 4 is only vaguely described. The text refers to the appendix for more details, but only numerical values are given there, still not explaining the precise algorithm.\", \"The reported results are anecdotal. Results are reported only for 3-4 images per dataset. It's unclear which images these are and how they were selected. At the very least, averaged results over the entire dataset should be reported.\", \"Results show only a minor improvement (in the range of 1 or 2 %) over the very poor results of the baseline. Results are reported for one set of (manually selected?) training images. No details on how these images were selected are given. At the very least, the results should have been repeated for different sets of training images, so the standard deviation on the results could be added to the tables and the reader could get an idea of whether these results are significant or not.\", \"Given that a new metric is used, more naive baselines should be added: what RA values would one get with a random representation? Are the numbers reported significantly better?\", \"The whole paper builds on one baseline work, Matcher. The proposed method is applied only on top of that method and the results are only compared against that method. Other state-of-the-art methods, or extra baselines, should be added to the comparison. There is no further analysis, such as a sensitivity analysis of the hyperparameters used, an ablation study, or a comparison of different variants of the method (e.g. determining the lambda parameter for each of the selected dimensions separately, based on the observed differences in the training data). 
There are no qualitative results included either.\"], \"questions\": \"There are a lot of improvements necessary to bring this paper to the level required by ICLR.\\nI have several questions, mostly related to clarifying confusion (see above), but I don't think any answer will make me change my opinion on this paper, as it's lacking in several directions (contribution, clarity, experimental validation).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a feature modification approach for DINOv2 features aimed at enhancing Matcher\\u2019s [1] ability to segment glass-like objects. To achieve this, authors identified the feature directions corresponding to glass-like appearances by comparing features from glass or mirror regions using a manually labeled subset of 11 image pairs. The approach was evaluated on a subset of either 3 or 4 images from different datasets.\\n\\n---\\n[1] Yang Liu et al., Matcher: Segment anything with one shot using all-purpose feature matching., arXiv\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The high-level motivation of modifying foundation model features for specific object types is sound.\"], \"weaknesses\": [\"The paper is difficult to follow, lacks a logical flow in its presentation, and is missing a convincing motivation and generalizable results. The notation is improperly used and inconsistent throughout. Additionally, the paper does not adhere to general writing guidelines, and figures are not referenced in the text.\", \"**Structural Problems**\", \"The concept of Representation Accuracy (RA) is discussed at length, yet it lacks a formal definition or a clear explanation in plain English. RA is neither well-motivated nor supported by any explanation or experiment validating its usefulness as a metric. Furthermore, Equation 2 is not adequately explained as presented.\", \"All experiments are conducted on randomly sampled image sets of only 3 or 4 images, which is insufficient to support claims of generalization. As a result, the claims in the paper are limited to being a proof of concept for manually editing features for a small number of images, rendering all algorithmic claims unsupported.\", \"In Table 1, due to the lack of motivation and explanation for RA, the experimental results and conclusions appear disconnected. 
Since the experiments are based on just 3 or 4 selected images, the results presented in Table 2 are also not valid.\", \"**Some Writing Issues**\", \"Section 3 contains several notation issues. For instance, in Equation 1, the term $S$, which is not mentioned in the text, should have a subscript, $S_{rt}$, as it is defined over patches $r$ and $t$. Line 198 refers to Table 2 for RA comparison, but Table 2 presents an mIoU comparison. In line 212, there are misused variables, with \\u201ctarget image of $p^i_r$\\u201d actually referring to \\u201ctarget image of $M_t$\\u201d and \\u201c$M_r$\\u201d should be \\u201c$M_t$\\u201d in the sentence \\u201cRepresents the mask of the target image,\\u201d based on context.\", \"The usage of variables in Equation 3 and the corresponding text is inconsistent.\", \"In line 302, it says \\\"$F_i$ and $F_i$\\\" , but these should be two different terms.\"], \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The paper introduces a novel metric to assess the effectiveness of Vision Foundation Models (VFMs) in representing and segmenting glass-like objects. The authors evaluate the performance of Matcher [1] on this type of data, and conclude that VFMs struggle to accurately represent glass-like objects. To address this limitation, the paper proposes an alignment method to enhance the representations of glass-like objects specifically for Matcher-based downstream tasks. This approach is tested on three distinct datasets, showing some improvements in segmentation performance.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The proposed method is computationally efficient (at \\\"training\\\" time) and requires minimal learnable parameters or hyperparameter tuning.\", \"The comparative study using real-world pairs of images with and without the glass-like objects is original.\"], \"weaknesses\": [\"The overall clarity of the paper needs improvement. I recommend starting with a clear introduction to the problem and a stronger motivation for why it is important to address.\", \"The writing throughout the paper is often difficult to follow, affecting readability.\", \"The proposed method is based on a comparative study using a very limited number of image pairs. It is unclear how the conclusions drawn from such a small sample size can be generalized. This issue is evident, for example, in the dataset dependence of the parameter $\\\\lambda$. The method also seems to overlap with what could be achieved by training an adapter on top of DINOv2 [2] features using the reference images.\", \"The method relies on large Vision Foundation Models (VFMs) such as DINOv2 [2] and SAM [3]. 
Comparing its performance to standard segmentation approaches, such as a linear segmentation head on top of frozen features, would be useful to justify the high computational cost of the proposed method at inference and in general to put things into perspective.\", \"The overall performance improvements are modest, raising questions about the method\\u2019s practical impact.\"], \"questions\": \"- Can you clarify the distinction between the proposed metric (representation accuracy) and the accuracy achieved by a $k$-NN classifier at the patch level (see Hummingbird [4])? What additional insights does the proposed metric offer?\\n- Why are the results in the tables not reported as averages over the entire datasets?\\n\\n**References**\\n\\n[1] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching. The Twelfth International Conference on Learning Representations (ICLR), 2024.\\n\\n[2] DINOv2: Learning Robust Visual Features without Supervision. Transactions on Machine Learning Research (TMLR).\\n\\n[3] Segment Anything. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023.\\n\\n[4] Towards In-Context Scene Understanding. Advances in Neural Information Processing Systems (NeurIPS), 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"Summary:\\nThis paper proposes a feature modification approach for DINOv2 features aimed at enhancing Matcher\\u2019s ability to segment glass-like objects. It identifies the feature directions corresponding to glass-like appearances by comparing features from glass or mirror regions using a manually labeled subset of 11 image pairs. The approach was evaluated on a subset of either 3 or 4 images from different datasets.\", \"the_reviews\": \"\", \"the_main_strengths_are\": \"1\\uff09this paper addresses an interesting problem and the main weaknesses are: 1\\uff09Poor writing (bad structure, wrong references, missing definition of notations, grammar mistakes). 2\\uff09Insufficient experiments (anecdotal results, minor improvement, missing ablation study). All reviewers find this paper hard to read.\\n\\nDue to insufficient contributions, the AC agrees with the reviewers and does not recommend accepting it at this conference.\\nThe authors are encouraged to improve this work by making more substantial contributions to other venues.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers recognized that the main strengths are: 1\\uff09this paper addresses an interesting problem and the main weaknesses are: 1\\uff09Poor writing (bad structure, wrong references, missing definition of notations, grammar mistakes). 2\\uff09Insufficient experiments (anecdotal results, minor improvement, missing ablation study). All reviewers find this paper hard to read.\\n\\nThe authors did not provide feedback and the issues remain.\"}",
"{\"summary\": \"Although current VLMs provide feature representations that can be adapted to downstream tasks, these features are less effective on the glass-like object segmentation task. This paper proposes a new metric called representation accuracy and a simple method for segmenting glass-like objects. Specifically, the main idea is to utilize the visual properties of target objects to find representation dimensions which dominate in recognizing them. Given such information, specific representations are extracted regarding these target objects.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1) A new metric called representation accuracy is defined to compute the representation accuracy of a specific vision model. This metric is used to test with DINOv2 on glass-like datasets, showing that current VLMs are less effective in segmenting these glass-like objects.\\n\\n2) A new method is proposed to utilize the visual properties of objects to extract the most important feature dimensions to achieve better representations. It takes no extra computation or any other training. \\n\\n3) The experiments are conducted on three datasets showing the efficacy of the proposed method on the glass-like segmentation task.\", \"weaknesses\": \"1) Regarding the definition of representation accuracy in Equation 2, what is the definition of the masks of the target (M_t) and reference image (M_r)? The paper lacks a detailed explanation of how the masks are defined and computed.\\n\\n2) In methodology, how do you find the image pairs (comparative images) that show semantic similarity? Do you define the comparative images as semantically similar with slight visual differences?\\n\\n3) What is the definition of subtractive comparison among the features? 
Any math equation referring to it?\\n\\n4) In the captions of Figure 3, is it possible to provide further explanation on the \\\"interior and exterior aspects of the mirrored scenes\\\"?\", \"questions\": \"Please refer to the questions listed in \\\"Weaknesses\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
EPHsIa0Ytg | Improved Approximation Algorithms for $k$-Submodular Maximization via Multilinear Extension | [
"Huanjian Zhou",
"Lingxiao Huang",
"Baoxiang Wang"
] | We investigate a generalized form of submodular maximization, referred to as $k$-submodular maximization, with applications across the domains of social networks and machine learning. In this work, we propose the multilinear extension of $k$-submodular functions and unified Frank-Wolfe-type frameworks based on that. This continuous framework accommodates 1) monotone or non-monotone functions, and 2) various constraint types including matroid constraints, knapsack constraints, and their combinations. Notably, we attain an asymptotically optimal $1/2$-approximation for monotone $k$-submodular maximization problems with knapsack constraints, surpassing previous $1/3$-approximation results, and a factor-$1/3$ approximation for non-monotone $k$-submodular maximization problems with knapsack constraints and matroid constraints which outperforms previous $0.245$-approximation results. The foundation for our analysis stems from new insights into specific linear and monotone properties pertaining to the multilinear extension. | [
"$k$-submodular maximization",
"approximation algorithm",
"$k$-multilinear extension"
] | Accept (Spotlight) | https://openreview.net/pdf?id=EPHsIa0Ytg | https://openreview.net/forum?id=EPHsIa0Ytg | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ultbytdJaA",
"pFhGvDmmqc",
"oRjMO7wcBL",
"nkXOBLf2pn",
"Yo8BEeDDmR",
"XYtJrHp3tu",
"XDCfGELt0N",
"WiOxFlvz7B",
"Ujg5Giw7ij",
"SMwiZhBduY",
"P01Y3jFZb8",
"Ms6hual895",
"KsCJDtt5u1"
],
"note_type": [
"official_comment",
"decision",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1732193738105,
1737523567818,
1734741552834,
1732558914188,
1732194094576,
1732747713855,
1732194857779,
1730404316755,
1732194472906,
1733094407968,
1731349012184,
1730849088339,
1731046585793
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3295/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3295/Area_Chair_6hXW"
],
[
"ICLR.cc/2025/Conference/Submission3295/Reviewer_pvDA"
],
[
"ICLR.cc/2025/Conference/Submission3295/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3295/Reviewer_Tvbp"
],
[
"ICLR.cc/2025/Conference/Submission3295/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3295/Reviewer_HNQi"
],
[
"ICLR.cc/2025/Conference/Submission3295/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3295/Reviewer_HNQi"
],
[
"ICLR.cc/2025/Conference/Submission3295/Reviewer_Tvbp"
],
[
"ICLR.cc/2025/Conference/Submission3295/Reviewer_t6jy"
],
[
"ICLR.cc/2025/Conference/Submission3295/Reviewer_pvDA"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for your positive feedback. We are glad that you appreciate the technical novelty of our work.\\n\\n\\n> Notation departs substantially from most of the other $k$-submodular papers ...could have been introduced in a more intuitive way...\\n\\nThank you for the suggestion. It is indeed important to make the notations more friendly to the audience who are already familiar with the other definition of $k$-submodular functions (as noted in the footnote on Page 1). We have revised our manuscript (see [revised version](https://openreview.net/pdf?id=EPHsIa0Ytg)), where we highlight the existence of an equivalent definition of $k$-submodular in the introduction section. We also add this definition and explain its equivalence to our definition (Eq. (1)) in the updated Appendix A.2. \\n\\n\\n> It seems like non-monotone $k$-submodular optimization just uses monotone methods with a partial monotonicity property implied by $k$-submodular. But these results don't appear to be tight, and the problem seems to be much less well understood. Can the authors shed any light on this?\\n\\nIt is indeed important to understand the gap between our result and negative results. We do have some intuitions for why the results for non-monotone $k$-submodular optimization are not tight.\\n\\n- First, we note that even for $k=1$, there remains a longstanding open problem in closing the gap between the best known 0.401-approximation algorithm achieved by a variant of continuous greedy [Buchbinder et al., 2024] and the 0.478 inapproximability result for the non-monotone submodular optimization problem. 
Since $k$-submodular optimization generalizes submodular optimization, we anticipate that achieving tight results for the non-monotone $k$-submodular case will be at least equally (or even more) challenging.\\n\\n\\n- Second, most of the recent advances in continuous methods for submodular optimization concentrate on continuous greedy and FW-type methods in the monotone case. In fact, FW acts as a continuous analogue of the greedy algorithm, mimicking its selection of high marginal gain directions and inheriting its tight $(1\\u22121/e)$-approximation effectiveness. In comparison, the marginal gain is hard to estimate in the non-monotone case, and the linear surrogate often fails to capture the non-linear interactions between elements. This indicates that it is less likely to achieve tightness through an FW-type algorithm.\\n\\n\\n[Buchbinder et al., 2024] Constrained Submodular Maximization via New Bounds for DR-Submodular Functions, Niv Buchbinder and Moran Feldman, STOC 2024.\"}
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}",
"{\"metareview\": \"The paper studies the problem of maximizing a k-submodular objective function subject to constraints such as cardinality, matroid, and knapsack constraints. The problem is a generalization of submodular maximization, which is the special case k=1, with additional applications. This work is the first to extend the well-known approach based on the multilinear extension and continuous optimization from submodular functions to k-submodular functions. The resulting algorithms achieve improved approximation guarantees in several settings.\\n\\nThe reviewers appreciated the main contributions and generally found the theoretical contribution to be strong and novel. The improvement in the approximation for certain constraints such as multiple knapsack constrains is significant. A notable weakness raised by the reviewers is that the contribution is primarily of theoretical interest, since the running time is prohibitive in practice. Overall, the paper makes a valuable theoretical contribution to the area of submodular maximization.\", \"additional_comments_on_reviewer_discussion\": \"In addition to the main concern that the algorithms are inefficient, the reviews also raised some concerns regarding the exposition and the practical motivation for studying k-submodular maximization problems. The authors revised the paper to address these comments.\"}",
"{\"comment\": \"Thank you for the explanation. Given that, I've increased my score.\"}",
"{\"comment\": \"Thank you for your positive and encouraging feedback. We appreciate the recognition of our work\\u2019s contribution and will address the questions raised in your comments.\\n\\n> It\\u2019s unclear how relevant this problem is for the ML community\\n\\nWhile submodular maximization may be more famous in the ML community, many applications of it could be extended to $k$-submodular maximization. One example is diversity, where the selection needs to balance multiple sources.\\n* Feature selection: In machine learning, feature selection is the process of identifying a subset of features that are most relevant to a given task. $k$-submodular maximization can be used to find a diverse set of features that maximizes the performance of a model.\\n* Active learning: Active learning is a technique for selecting the most informative data points to label, which can help to reduce the cost of labeling data. $k$-submodular maximization can be used to select a diverse set of data points that are likely to provide the most information about the underlying model.\\n* Recommendation systems: Recommendation systems are used to provide personalized recommendations to users. $k$-submodular maximization can be used to select a diverse set of items that are likely to be of interest to a given user.\\n\\nThere are many other applications. When we mention sensor placement in the manuscript, this is related to data acquisition in machine learning: determining the optimal placement of sensors to collect the most informative data for training a model. It is also useful in anomaly detection, where one strategically places monitoring agents within a network to maximize the chances of detecting anomalies. Meanwhile, $k$-submodular optimization is useful for resource allocation tasks, which is relevant in several ML scenarios. In distributed computing, one may assign tasks to a limited number of computing nodes to optimize ML training performance and energy consumption. 
In cloud ML, one may allocate different types of virtual machines or containers to meet varying workloads while minimizing costs.\\n\\nWe have added this discussion in the revised version; see Appendix A.1 in [revised manuscript](https://openreview.net/pdf?id=EPHsIa0Ytg).\\n\\n> Could you compare your results with those for submodular maximization to clarify the gap between this work and existing results for multilinear extension in submodular maximization?\\n\\nBelow we clarify the distinction between our work and existing results on the multilinear extension in submodular maximization, which we have addressed in lines 253\\u2013260. \\n\\nSpecifically, for submodular maximization, the results for **monotone cases** using multilinear extension-based algorithms are as follows:\\n1. A $(1-1/e-\\\\varepsilon)$-approximation for $O(1)$ knapsacks;\\n2. A $(1-1/e-\\\\varepsilon)$-approximation for a single matroid;\\n3. A $(0.6(1-1/e)/b - \\\\varepsilon)$-approximation for the intersection of $O(1)$ knapsacks and $b$ matroids, all achievable in polynomial time with respect to $n$.\\n\\nFor **non-monotone cases**, these algorithms achieve:\\n1. A $(0.401-\\\\varepsilon)$-approximation for $O(1)$ knapsacks;\\n2. A $(0.401-\\\\varepsilon)$-approximation for a single matroid;\\n3. A $(0.24/b - \\\\varepsilon)$-approximation for the intersection of $O(1)$ knapsacks and $b$ matroids, also in polynomial time with respect to $n$.\\n\\nIn comparison, for **$k$-submodular maximization**, multilinear extension-based algorithms yield the following results for **monotone cases**:\\n1. A $(1/2-\\\\varepsilon)$-approximation for $O(1)$ knapsacks;\\n2. A $(1/2-\\\\varepsilon)$-approximation for a single matroid;\\n3. A $(0.3/b - \\\\varepsilon)$-approximation for the intersection of $O(1)$ knapsacks and $b$ matroids, all achievable in polynomial time with respect to $n$ and $1/\\\\varepsilon$.\\n\\nFor **non-monotone cases**, the results are:\\n1. 
A $(1/3-\\\\varepsilon)$-approximation for $O(1)$ knapsacks;\\n2. A $(1/3-\\\\varepsilon)$-approximation for a single matroid;\\n3. A $(0.2/b - \\\\varepsilon)$-approximation for the intersection of $O(1)$ knapsacks and $b$ matroids, also in polynomial time with respect to $n$.\"}",
"{\"comment\": \"Thank you for the response. I will maintain my score.\"}",
"{\"comment\": \"We thank the reviewer for their thoughtful feedback. We are encouraged by the recognization of the importance of the $k$-submodular maximization problem, the clarity of our proofs, and the unified algorithm achieving better approximation ratios. Below, we address the questions raised.\\n\\n>...the k-submodular functions can be reduced to general submodular functions under partition matroid. Given this reduction, the idea of defining the multi-linear extension for $k$-submodular functions is less novel\\n\\nWe believe it is non-trivial to reduce non-negative $k$-submodular functions to general non-negative submodular functions under a partition matroid. The statement in [Iwata et al., 2016]\\u2014\\\"The $k$-submodular function maximization problem is closely related to the submodular function maximization with a partition matroid constraint\\\"\\u2014does not imply such a reduction exists. \\n\\nOne possible attempt of reduction is as follows. We define the domain as $\\\\\\\\bar{\\\\Delta}\\\\_k\\\\^n\\\\subseteq \\\\\\\\{0,1\\\\\\\\}\\\\^{nk}$ where $\\\\bar{\\\\Delta}\\\\_k = \\\\\\\\{x \\\\in \\\\{0,1\\\\}\\\\^{k} : \\\\sum\\\\_{j=1}\\\\^k x_j \\\\leq 1\\\\\\\\}$, and define a function $\\\\bar{f}: \\\\bar{\\\\Delta}\\\\_k\\\\^n\\\\to \\\\mathbb{R}$ as $\\\\bar{f}(S) = f(\\\\mathbf{s})$, where $\\\\mathbf{s}$ is defined as $\\\\mathbf{s}\\\\_i = j$ if there exists a unique element $e\\\\_{i,j} \\\\in S$ and $\\\\mathbf{s}_i = 0$ otherwise. However, we may not be able to extend the domain of such a submodular function to $\\\\\\\\{0,1\\\\\\\\}^{nk}$ without violating non-negativity or monotonicity, even for the simplest case of $k = 2$. Specifically:\\n1. As shown in Lemma 2 of [Singh et al., 2012], there exists a non-negative 2-submodular function, for which no extension is both non-negative and submodular.\\n2. 
Furthermore, Lemma 3 of [Singh et al., 2012] demonstrates that there exists a monotone, non-negative 2-submodular function, for which no extension is non-negative, monotone, and submodular.\\n\\nThese results highlight that $k$-submodular functions cannot generally be reduced to submodular functions under partition matroids, and the development of a multilinear extension tailored specifically for $k$-submodular functions remains new.\\n\\nWe have added this explanation in the revised version; see Appendix A.3 in our [updated manuscript](https://openreview.net/pdf?id=EPHsIa0Ytg).\\n\\n\\n[Iwata et al., 2016] Improved Approximation Algorithms for k-Submodular Function Maximization, Satoru Iwata, Shin-ichi Tanigawa, Yuichi Yoshida.\\n\\n[Singh et al., 2012] On Bisubmodular Maximization. Ajit P. Singh, Andrew Guillory, Jeff Bilmes.\\n\\n\\n\\n>...give some intuition of why the algorithm can't achieve a better bound (e.g. $1\\u22121/e$) by explaining some key aspects of the proof in Lemma 3.3?\\n\\nThe main reason why the approximation ratio is not as good as $1\\u22121/e$ for $k$-submodular maximization is due to the change of domain from the cube $[0,1]^{nk}$ to a corner $\\Delta_k^n$. To illustrate this, we first recap why FW can find a $(1-1/e)$-approximate solution for submodular optimization. To prove the approximation ratio, we link the current solution $x^k$ and optimal solution $x^\\star$ via the auxiliary point $x^\\star \\vee x^k$ obtained by the coordinate-wise maximum operation, and then show the one-step improvement $F(x^{k+1}) - F(x^\\star)\\gtrsim (1-\\delta)(F(x^k)- F(x^\\star))$ with step-size $\\delta$ and solution $x^k$ at the $k$-th step of the FW method. By accumulation, we have $F(x^{K}) \\gtrsim (1-1/e) F(x^\\star)$. However, the function value of the auxiliary point $x^\\star \\vee x^k$ under the coordinate-wise maximum operation may not be well-defined for $k$-submodular functions. 
Instead, in the proof of Lemma 3.3, we use a new auxiliary point $o(t) = x(t) + (1 \\u2212 t)o^\\star$ that lies in the domain $\\Delta_k^n$, and show the one-step improvement $F(x(t + \\delta)) \\u2212 F(x(t)) \\gtrsim F(o(t)) \\u2212 F(o(t + \\delta))$, which results in $F(x^{K}) \\gtrsim 1/2 F(x^\\star)$.\\n\\n\\n> Would using a different concentration inequality, such as Hoeffding's inequality instead of the Chernoff bound, potentially improve the query complexity?\\n\\nWe agree that using alternative concentration inequalities, such as Hoeffding's inequality (instead of the Chernoff bound), might improve the query complexity during the first phase of maximizing the continuous function. However, the overall query complexity of the algorithm could be dominated by the rounding procedure, which remains polynomial even if we have an improved concentration bound. For instance, finding a continuous solution under a knapsack constraint requires $O(\\frac{kn^6\\log\\frac{n\\varepsilon}{\\eta}}{\\varepsilon^3})$ queries, while the rounding process to obtain a discrete solution necessitates an additional $O(k^{poly(1/\\varepsilon)} n^{poly(1/\\varepsilon)})$ queries. Additionally, our primary focus in this work is on achieving the best approximation ratio rather than minimizing query complexity. We believe that exploring such refinements to improve query complexity in the first phase will be an interesting direction if the first-phase query complexity becomes dominant in some future settings.\"}
"{\"summary\": \"This paper addresses the problem of maximizing $k$-submodular functions under various constraints. By generalizing the multi-linear extension commonly used for standard submodular functions to apply to $k$-submodular functions, the authors introduce unified Frank-Wolfe-type frameworks to tackle these problems in a continuous domain. The proposed algorithm attains an approximation ratio of $1/2$ for monotone $k$-submodular optimization and $1/3$ for nonmonotone $k$-submodular optimization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The problem of k-submodular maximization is important in many fields of machine learning and aligns well with the conference\\u2019s broader focus.\\n2. The paper provides solid and clear proof analysis and proposes a unified algorithm that achieves better approximation ratios compared with the previous method.\\n3. The proof sketch of Lemma 3.3 is presented clearly and highlights the technical novelty of the proof.\", \"weaknesses\": \"1. As has been highlighted in [1], the $k$-submodular functions can be reduced to general submodular functions under partition matroid. Given this reduction, the idea of defining the multi-linear extension for $k$-submodular functions is less novel.\\n2. Since this is a theoretical work with no experimental results, the primary contributions should ideally lie in offering new technical skills or insights to the submodular optimization community. However, most of the proof analysis appears standard. \\n\\n-[1]. Satoru Iwata, Shin-ichi Tanigawa, and Yuichi Yoshida. Improved approximation algorithms for\\nk-submodular function maximization.\", \"questions\": \"The paper is clearly written. My main question concerns the proof of Lemma 3.3. I understand that the best-known lower bound for the monotone case is $1/2$, but could you please give some intuition of why the algorithm can't achieve a better bound (e.g. 
$1-1/e$) by explaining some key aspects of the proof in Lemma 3.3? Additionally, I noticed that the query complexity of the algorithm, even for the monotone case, is somewhat inefficient. Would using a different concentration inequality, such as Hoeffding's inequality instead of the Chernoff bound, potentially improve the query complexity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for reviewing our manuscript. We are encouraged by your recognition of the importance, novelty, and clarity of our work.\\n\\n> Is the definition of $k$-submodular presented in the paper the typical definition? And if it is not, is it clearly equivalent? \\n\\nWe appreciate the question regarding the definition of $k$-submodularity. Yes, both the definition in our paper (see Eq. (1)) and the one from [Sakaue (2017)] (as noted in the footnote on Page 1) are typical definitions for $k$-submodular and they are equivalent. To clarify, let us outline the equivalence with the definition on Page 2 of [Sakaue (2017)]:\\n\\n\\n**Definition 1 (from [Sakaue (2017)])**\\n- The domain is represented as $(k+1)^V := \\\\{(X_1, \\\\ldots, X_k) \\\\mid X_i \\\\subseteq V, X_i \\\\cap X_j = \\\\emptyset \\\\text{ for } i \\\\neq j\\\\}$, where $V$ is a ground set.\\n- A function $f: (k+1)^V \\\\to \\\\mathbb{R}$ is called $k$-submodular if for any $x = (X_1, \\\\ldots, X_k)$ and $y = (Y_1, \\\\ldots, Y_k)$, it satisfies:\\n$f(x) + f(y) \\\\geq f(x \\\\cap y) + f(x \\\\cup y).$\\n\\n**Definition 2 (Eq. (1) in our paper)**\\n- The domain is $\\\\{0, 1, \\\\ldots, k\\\\}^n$.\\n- A function $f: \\\\{0, 1, \\\\ldots, k\\\\}^n \\\\to \\\\mathbb{R}$ is called $k$-submodular if for any $s$ and $t$, it satisfies:\\n$f(s) + f(t) \\\\geq f(\\\\min_0(s, t)) + f(\\\\max_0(s, t)),$\\nwhere:\\n- $(\\\\min_0(s, t))_i = \\n \\\\begin{cases} \\n 0 & \\\\text{if } s_i t_i \\\\neq 0 \\\\text{ and } s_i \\\\neq t_i, \\\\\\\\\\\\\\\\\\n \\\\min(s_i, t_i) & \\\\text{otherwise},\\n \\\\end{cases}$\\n - $(\\\\max_0(s, t))_i = \\n \\\\begin{cases} \\n 0 & \\\\text{if } s_i t_i \\\\neq 0 \\\\text{ and } s_i \\\\neq t_i, \\\\\\\\\\\\\\\\\\n \\\\max(s_i, t_i) & \\\\text{otherwise}.\\n \\\\end{cases}$\\n\\n#### Mapping Between Domains:\\n- In Definition 1, the domain is $(k+1)^V$, which represents $k$ disjoint subsets $X_1, \\\\ldots, X_k$ of a ground set $V$.\\n- In Definition 2, the domain is $\\\\{0, 1, \\\\ldots, k\\\\}^n$, where each element $i \\\\in [n]$ belongs to one of $k$ disjoint sets, represented by its label $1, \\\\ldots, k$, or is null ($0$).\\n\\nThe two domains are equivalent since labeling each element in $V$ corresponds directly to assigning it to a 
subset $X_i$ (or $0$ if it belongs to none).\\n\\n#### Operations ($\\\\cup, \\\\cap$ vs. $\\\\min_0, \\\\max_0$):\\n- **Intersection ($\\\\cap$ in Definition 1 and $\\\\min_0$ in Definition 2):**\\n - For Definition 1, $(x \\\\cap y)_i = X_i \\\\cap Y_i$.\\n - For Definition 2, $(\\\\min_0(s, t))_i = \\\\min(s_i, t_i)$ when $s_i, t_i \\\\in \\\\{0, i\\\\}$, which aligns with the intersection of sets for corresponding labels. If $s_i$ and $t_i$ are different and nonzero, the result is $0$, corresponding to the empty intersection.\\n\\n- **Union ($\\\\cup$ in Definition 1 and $\\\\max_0$ in Definition 2):**\\n - For Definition 1, $(x \\\\cup y)\\\\_i = X_i \\\\cup Y_i \\\\setminus \\\\cup_{j \\\\neq i}(X_j \\\\cup Y_j)$, which ensures disjointness.\\n - For Definition 2, $(\\\\max_0(s, t))_i = \\\\max(s_i, t_i)$ when $s_i, t_i \\\\in \\\\{0, i\\\\}$, which aligns with the union of sets. If $s_i$ and $t_i$ are different and nonzero, the result is $0$, ensuring disjointness.\\n\\n#### Submodular Inequality:\\n- Both definitions use the inequality:\\n$f(x) + f(y) \\\\geq f(x \\\\cup y) + f(x \\\\cap y),$\\n which holds in both formulations because the operations $\\\\cup, \\\\cap$ in Definition 1 are equivalent to $\\\\max_0, \\\\min_0$ in Definition 2, and the domains and function mappings are equivalent.\\n\\n\\nWe have revised our manuscript (see [revised manuscript](https://openreview.net/pdf?id=EPHsIa0Ytg)), where we highlight the existence of this equivalent definition of $k$-submodular in the introduction section. We also add this definition and explain its equivalence to our definition (Eq. (1)) in the updated Appendix A.2.\"}",
"{\"title\": \"Reviewer Comment\", \"comment\": \"Thanks for the clarification! I don't have any further concerns.\"}",
"{\"summary\": \"This paper studies monotone and non-monotone k-submodular maximization subject to various constraint systems. Prior work on this topic has studied combinatorial algorithms only -- this paper introduces a multilinear extension for k-submodular functions. Despite the power of the continuous approach for 1-submodular functions, it hasn't been applied to the k-submodular case. Via this method, the authors obtain improved approximation ratios in several constraint regimes, most notably O(1) knapsacks, where they achieve an asymptotically tight ratio.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"In some sense, developing continuous algorithms for the k-submodular case seems like a natural generalization of the 1-submodular methods. However, there were a number of challenges the authors had to address, which is likely why prior work had avoided this direction. Everything looks easy in hindsight.\", \"Specifically, the way the authors addressed a certain feasibility issue (discussed on page 4) by shifting the optimization target from o^* is relatively novel. It is a little surprising to me, not that this method obtains a constant factor, but that it gets the optimal ratio in some settings (O(1) knapsacks). Other generalizations (such as rounding, approximate linearity, etc.) seem more straightforward.\", \"For non-monotone functions, the ratios aren't tight, but significant improvements in state-of-the-art are obtained.\"], \"weaknesses\": [\"Notation departs substantially from most of the other k-submodular papers I've seen. It took a bit of thought to see that everything is equivalent. Likely this could have been introduced in a more intuitive way that would make the paper more accessible.\", \"The main text is really an extended abstract, with no proofs. Instead, it offers arguments for why the methods are interesting. 
In general, I would like to see at least some part of the technical arguments condensed in the main text -- but this is more a critique of the publication / reviewing model. Often, mistakes are found later (obviously didn't have time to check the 30-page appendix in full detail).\", \"Algorithms are of theoretical interest only, of order n^{poly(1/\\\\epsilon)}. It is difficult to imagine a scenario where one would want to try to implement these algorithms. Thus, their ability to tackle big data k-submodular instances of problems relevant to the ML community is limited to non-existent.\"], \"questions\": \"It seems like non-monotone k-submodular optimization just uses monotone methods with a partial monotonicity property implied by k-submodular. But these results don't appear to be tight, and the problem seems to be much less well understood. Can the authors shed any light on this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper considers the problem of maximization of $k$-submodular functions, where elements in the ground set can be selected into one of $k$ sets. $k$-submodular functions are a generalization of ordinary submodular set functions, which correspond to the special case $k=1$. For the maximization of ordinary submodular set functions, the continuous multilinear extension has been used in order to yield algorithms with better approximation guarantees compared to combinatorial approaches such as greedy algorithms. In contrast, only combinatorial approaches have previously been proposed for $k$-submodular function maximization. Inspired by the effectiveness of continuous approaches in the ordinary submodular function case, this paper extends the definition of multilinear extension to $k$-submodular functions, and proposes algorithms using this multilinear extension to achieve better approximation guarantees than those existing for a variety of constraints (see Table 1). Their algorithms use Frank-Wolfe types of methods, which is different than continuous algorithms used for ordinary submodular functions.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"$k$-submodular functions have been a topic of recent interest, and this paper makes an important contribution by introducing an extended multilinear extension and proposing algorithms which have better approximation guarantees using the multilinear extension.\", \"They developed algorithms with theoretical performance guarantees stronger than those existing in the literature. Further, at least some of their guarantees matched the best known lower bound of Iwata et al. (2016).\", \"They used Frank-Wolfe type of methods instead of simply extending the most standard algorithms using the multilinear extension for standard submodular functions such as that of Calinescu et al. [2011] (see reference below). 
They further explain the technical challenges that arise when simply trying to extend those standard approaches. They also had to use an alternative method of rounding. So their theoretical results do not seem trivial to me.\", \"Paper is very well-written and clear.\", \"Calinescu, Gruia, et al. \\\"Maximizing a monotone submodular function subject to a matroid constraint.\\\" SIAM Journal on Computing 40.6 (2011): 1740-1766.\"], \"weaknesses\": [\"There is no experimental evaluation of the algorithms, and in fact the algorithms might not be very practical to implement. Multilinear extension algorithms are often much slower and less practical compared to combinatorial ones for ordinary submodular functions, so one would probably expect that to also be the case in this setting.\", \"This paper extends and combines ideas from many papers in the literature. For example, the ordinary submodular function multilinear extension is extended to the $k$-submodular case, and in addition Frank-Wolfe style algorithms have been used for ordinary submodular functions. One con then might be that there is not a huge amount of novelty, but I think it is still plenty sufficient for publication.\"], \"questions\": [\"Is the definition of $k$-submodular presented in the paper the typical definition? And if it is not, is it clearly equivalent? I recall seeing different definitions of $k$-submodular in related work.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper addresses the k-submodular maximization problem under matroid and knapsack constraints. k-submodular maximization is a generalization of submodular maximization where each element of an n-dimensional vector can take values from {0, ..., k}. The objective is to maximize a submodular function defined on these vectors, subject to given constraints.\\n\\nThe authors apply the multilinear extension technique\\u2014previously used in submodular maximization but novel to k-submodular maximization\\u2014achieving an improved approximation factor for this problem, particularly under the d-knapsack constraint. They have improvements for other constraints too, but most improvements are minor except for this specific constraint.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The main strength lies in applying the multilinear extension to k-submodular maximization, which could inspire similar approaches in related problems. Additionally, the improvement in the approximation factor under the d-knapsack constraint is noteworthy.\", \"weaknesses\": \"Although the algorithm provides a good approximation factor, its practical applicability is limited due to potentially high running times, and no experimental results are provided to assess real-world performance. Additionally, while the first paragraph of introduction mentions applications, it\\u2019s unclear how relevant this problem is for the ML community, as it\\u2019s not as widely studied as submodular maximization, which may hold more appeal for ML research.\\n\\n## Minor comments:\\n- On page 5, you mention that Niu et al. achieved a 1/3-approximation ratio for the non-monotone case under a single matroid constraint, but a worse result appears on page 2, Table 1.\\n- On page 2, the notation for min_0 and max_0 is inconsistent\\u2014sometimes with 0 as a subscript, other times appearing below them. 
Please standardize.\", \"questions\": \"Could you compare your results with those for submodular maximization to clarify the gap between this work and existing results for multilinear extension in submodular maximization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
EP6n8LCEK6 | Understanding Prejudice and Fidelity of Diverge-to-Converge Multi-Agent Systems | [
"Zhen Tan",
"Song Wang",
"Shyam Marjit",
"Zihan Chen",
"Yinhan He",
"Xinyu Zhao",
"Pingzhi Li",
"Jundong Li",
"huan liu",
"Tianlong Chen"
] | Large language model (LLM) agents have demonstrated substantial potential across various tasks, particularly in multi-agent systems. Among these, \textit{Diverge-to-Converge} (D2C) frameworks stand out for their ability to iteratively diversify and converge intermediate thoughts to improve problem-solving. In this paper, we conduct a comprehensive study on the \textit{\textbf{prejudice}} and \textit{\textbf{fidelity}} of typical D2C frameworks, including both model-level and society-level frameworks.
\ding{182} In the \textit{prejudice} section, we uncover an inherent \textit{confirmation bias} in D2C systems, which not only leads to suboptimal performance, but also amplifies social biases, such as gender discrimination and political partisanship. Surprisingly, we find that by reframing open-ended problems into controlled initialized problems, this bias can be leveraged to foster more equitable and effective agent interactions, ultimately improving performance.
\ding{183} In the \textit{fidelity} section, we explore the scaling laws of D2C frameworks at different granularities, revealing that increasing the number of agents enhances performance only when the system is not yet saturated---such as in complex tasks or with weaker agents. In saturated scenarios, however, adding more agents can degrade performance.
To facilitate further study, we develop \texttt{APF-Bench}, a benchmark specifically designed to evaluate such inherent weaknesses of D2C frameworks.
We hope our findings offer instructional insights into the strengths and limitations of D2C multi-agent systems, offering guidance for developing more robust and effective collaborative AI systems. | [
"Large language model agents",
"Multi-Agent System",
"Benchmark"
] | Reject | https://openreview.net/pdf?id=EP6n8LCEK6 | https://openreview.net/forum?id=EP6n8LCEK6 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zUXPRw8m4N",
"zLqfG3eZZH",
"yY27PPqzg4",
"wWqvK8HKIr",
"vJa4fsXY4x",
"qKLdhZT6YP",
"mmka0zv5XS",
"ko8CWc6P2B",
"jJVQ48wHve",
"hBHW86bGA5",
"fRsj1rPqjP",
"etZ2jBIeZX",
"XYU0lBvj3A",
"XT6iAGRBLL",
"UkgCXQIMQN",
"ShWJS4rKIA",
"QoynjxEMle",
"BywntzFgw9",
"Bo8pKMsRgz",
"9neFuFva3B",
"3pUIi6WfYD",
"1fozFtZ70h"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1737523998175,
1731117818813,
1731124171723,
1732749550105,
1732550257072,
1730761325444,
1732750083272,
1732395046735,
1732394690819,
1733141702989,
1732395404022,
1732405215177,
1732394394461,
1730026131386,
1732750263264,
1732750213825,
1732409260517,
1734620426095,
1732405087106,
1732750460169,
1732612615200,
1732611562460
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9661/Reviewer_EH5M"
],
[
"ICLR.cc/2025/Conference/Submission9661/Reviewer_z9FH"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Reviewer_QY19"
],
[
"ICLR.cc/2025/Conference/Submission9661/Reviewer_QY19"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Reviewer_xVhp"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Area_Chair_o5zJ"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9661/Reviewer_xVhp"
],
[
"ICLR.cc/2025/Conference/Submission9661/Reviewer_EH5M"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The authors investigate reasoning pathologies of a certain class of multi-agent LLM systems, Diverge-to-Converge (D2C) frameworks, both at the model- and society-level. The authors identify an inherent confirmation bias in D2C systems that results in social biases and task underperformance, which can be alleviated if open-ended questions are re-phrased as binary. The authors then study the scaling laws of D2C frameworks, finding that adding more agents only results in performance improvements if the system is not yet saturated but can otherwise even degrade performance. The authors suggest remedies for both these pathologies and release APF-Bench to specifically evaluate these weaknesses.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Very timely and can provoke thought - e.g. trade-off bias/compute\", \"Robust evaluation across multiple datasets\", \"the idea to use reframing to tackle biases seems novel\"], \"weaknesses\": [\"Could have discussed a greater variety of biases other than confirmation bias\", \"It isn't clear how questions of real-world importance that are open-ended can always be brought into binary form.\", \"conceptual advances are limited - scaling laws / reframing techniques themselves feel rather incremental\", \"line 216 \\\"menifest\\\"\"], \"questions\": [\"How do you prevent bias in the debate judgements?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper examines the limitations of Diverge-to-Converge (D2C) frameworks in large language model (LLM) agents, focusing on prejudice and fidelity. It reveals a confirmation bias in D2C systems that hampers performance and amplifies social biases, but reframing open-ended problems as binary questions mitigates these effects. The study also shows that increasing the number of agents only improves performance under unsaturated conditions. Additionally, the authors introduce APF-Bench, a benchmark to evaluate these weaknesses, providing insights for building better collaborative AI systems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. It uncovers and addresses *confirmation bias* in D2C frameworks, providing practical solutions to mitigate performance issues and social biases.\\n2. The study examines both *prejudice* and *fidelity*, offering a detailed understanding of D2C frameworks across multiple levels.\\n3. By demonstrating how reframing problems into binary questions improves fairness and effectiveness, the research has real-world applicability.\\n4. The development of *APF-Bench* as a dedicated tool for evaluating D2C systems is a valuable resource for future research.\\n5. The analysis of scaling laws provides essential guidelines for optimizing agent collaboration in different task scenarios.\", \"weaknesses\": \"1. Limited Real-World Testing: The findings might lack generalizability if not tested in diverse, real-world multi-agent scenarios.\\n2. Potential Oversimplification: Reframing problems as binary questions may oversimplify complex tasks, possibly limiting the depth of solutions.\\n3. Scalability Constraints: The performance degradation observed in saturated systems indicates a limitation in scaling D2C frameworks effectively.\\n4. 
Bias Mitigation Trade-offs: While the approach reduces biases, it may inadvertently introduce new limitations or biases in certain contexts.\", \"questions\": [\"Why do you study D2C frameworks rather than other MAS frameworks? Is D2C a typical and widely adopted MAS framework? What are the incentives behind this choice?\", \"What is the main *contribution in scientificity* that the paper claims? This paper does a lot of evaluation and analysis on different LLMs, but they are the existing ones. Could you provide insights into designing LLMs that can inherently avoid or mitigate confirmation bias? Or, can you give a discussion on the underlying causes of such bias, which could possibly arise at the data level or the pre-training/fine-tuning level instead of solely empirical discovery?\", \"See also Weaknesses for other questions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"> Q2: What is the main contribution in scientificity that the paper claims? This paper does a lot of evaluation and analysis on different LLMs, but they are the existing ones. Could you provide insights into designing LLMs that can inherently avoid or mitigate confirmation bias? Or, can you give a discussion on the underlying causes of such bias, which could possibly arise at the data level or the pre-training/fine-tuning level instead of solely empirical discovery?\\n\\n**Comment**: \\nThank you for the opportunity to clarify the primary scientific contributions of our paper, particularly in the context of modern multi-agent systems (MAS) and Large Language Models (LLMs). \\n1. **Main Scientific Contribution**: The core contribution of our study is identifying a significant confirmation bias inherent in the widely adopted Diverge-to-Converge (D2C) frameworks within MAS. Our analysis reveals that the encouragement of diverse thinking among agents, a common feature of these frameworks, paradoxically leads to confirmation bias. This insight is crucial as it highlights a fundamental issue in a growing field.\\n\\n2. **Proposed Solution and Its Impact**: To address this bias, our paper presents a simple yet universally effective solution. By implementing a controlled initialization of the tasked question, we reframe the problem in a way that not only enhances the performance of D2C frameworks but also can help to mitigate broader social biases perpetuated by LLMs, such as those related to gender or ideology. This approach leads to better outcomes by aligning the divergent thinking of agents towards a more balanced convergence. \\n\\n3. **Value of the Findings**: Although our research does not introduce new training techniques or LLM architectures, the significance of our findings lies in their generality and practical applicability. 
By demonstrating how a strategic intervention in problem framing can influence systemic biases, our work contributes a valuable perspective to the field of LLM agent research. This contribution is particularly pertinent as it provides a novel lens through which the community can reassess and refine the operational dynamics of MAS frameworks.\\n\\n4. **Comparison with other LLM Debias Direction**: Thanks for pointing out the point. The studied bias is identified and tackled at the level of agent system, instead of individual LLM. We acknowledge the importance of examining biases at the data and model levels in LLM research. Our work complements these efforts by identifying and addressing biases in MAS frameworks, particularly in how agent interactions can propagate or mitigate biases. We believe that our contributions provide valuable insights that are **orthogonal to, yet supportive of**, the broader goals of reducing bias in AI systems. In the appendix, we will add a section clarifying that our findings are orthogonal to other research examining LLM biases from data or model perspectives. This section will state:\\n\\n``*We acknowledge the importance of examining biases at the data and model levels in LLM research. Our work complements these efforts by identifying and addressing biases in MAS frameworks, particularly in how agent interactions can propagate or mitigate biases. We believe that our contributions provide valuable insights that are orthogonal to, yet supportive of, the broader goals of reducing bias in AI systems.*''\"}",
"{\"comment\": \"I thank the authors for their response and have raised my score.\"}",
"{\"summary\": \"The paper conducts a study on confirmation bias from initial responses in different multi-agent LLM setups and comes up with a technique to prevent this bias (and thus improve benchmark performance) by changing the framing of questions. It then presents very initial work on how multi-agent system performance scales with the number of agents and tokens.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": [\"The paper presents a very comprehensive review of existing multi-agent LLM research and fits in prejudice and fidelity quite well in these settings. This provides useful context to understand the paper\\u2019s key contributions.\", \"The problem reframing method is reasonably novel and the experimental evaluations are comprehensive enough to demonstrate improvements with this method.\", \"It presents initial interesting results around differences in scaling the number of agents/LLM calls versus the number of tokens per generation. This could allow for a lot more future work in multi-agent LLM research.\", \"APF-Bench encompasses other benchmarks and can act as a useful starting point for similar research directions.\"], \"weaknesses\": [\"The paper explores only problem reframing as a bias mitigation strategy. However, not every problem can be converted into a binary problem, and other strategies are not explored at all.\", \"The paper does not perform evaluations on any open source models.\", \"The refinement strategy for datasets could introduce selection bias and skew results. I would be interested in seeing results across a random subset of the test set on the benchmarks used.\", \"The paper spends its first 5.5 pages providing a background on the problem and multi-agent LLM settings. This takes away from its key contributions, which are limited to the problem reframing strategy and very introductory work on scaling laws around fidelity. 
Section 6.2 is extremely limited and does not back up its claims with linked experiments.\", \"The appendix presents examples of model outputs, however it does not provide examples of inputs to the models (especially in the problem reframing setting). I\\u2019ve posed questions around these examples in Questions section of my review.\"], \"questions\": [\"Page 18, Case 2, GSM8k: Could the authors provide complete inputs to the models and their outputs for each iteration?\", \"Is there a hypothesis around why the results hold and such biases occur in language models? Are there reasonable tests that can be conducted around this?\", \"Could there exist better reframing techniques? Why was the binary reframing technique selected? Will it work for all tasks?\", \"Update - these questions have been answered by the authors.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your thoughtful feedback and the opportunity to discuss the contributions of our work further, particularly in the context of modern multi-agent systems (MAS) and Large Language Models (LLMs).\\n\\n1. **Main Scientific Contribution**: The core contribution of our study is identifying a significant confirmation bias inherent in the widely adopted Diverge-to-Converge (D2C) frameworks within MAS. Our analysis reveals that the encouragement of diverse thinking among agents, a common feature of these frameworks, paradoxically leads to confirmation bias. This insight is crucial as it highlights a fundamental issue in a growing field.\\n\\n2. **Proposed Solution and Its Impact**: To address this bias, our paper presents a simple yet universally effective solution. By implementing a controlled initialization of the tasked question, we reframe the problem in a way that not only enhances the performance of D2C frameworks but also can help to mitigate broader social biases perpetuated by LLMs, such as those related to gender or ideology. This approach leads to better outcomes by aligning the divergent thinking of agents towards a more balanced convergence. \\n\\n3. **Value of the Findings**: Although our research does not introduce new training techniques or LLM architectures, the significance of our findings lies in their generality and practical applicability. By demonstrating how a strategic intervention in problem framing can influence systemic biases, our work contributes a valuable perspective to the field of LLM agent research. This contribution is particularly pertinent as it provides a novel lens through which the community can reassess and refine the operational dynamics of MAS frameworks.\\n\\n4. **Comparison with other LLM Debias Direction**: Thanks for pointing out the point. The studied bias is identified and tackled at the level of agent system, instead of individual LLM. 
We acknowledge the importance of examining biases at the data and model levels in LLM research. Our work complements these efforts by identifying and addressing biases in MAS frameworks, particularly in how agent interactions can propagate or mitigate biases. We believe that our contributions provide valuable insights that are **orthogonal to, yet supportive of**, the broader goals of reducing bias in AI systems. In the appendix, we will add a section clarifying that our findings are orthogonal to other research examining LLM biases from data or model perspectives. This section will state:\\n\\n``*We acknowledge the importance of examining biases at the data and model levels in LLM research. Our work complements these efforts by identifying and addressing biases in MAS frameworks, particularly in how agent interactions can propagate or mitigate biases. We believe that our contributions provide valuable insights that are orthogonal to, yet supportive of, the broader goals of reducing bias in AI systems.*''\"}",
"{\"comment\": \"> W1: The paper explores only problem reframing as a bias mitigation strategy. However, not every problem can be converted into a binary problem, and other strategies are not explored at all.\\n\\n**Comment**: Replied in the general response.\\n\\n---\\n\\n\\n> W2: The paper does not perform evaluations on any open-source models.\\n\\n**Comment**: Thank you for your valuable feedback. We acknowledge the importance of evaluating open-source models to increase the reproducibility and accessibility of our findings. While the current study primarily utilizes proprietary models like GPT-4o and Gemini due to their advanced capabilities and relevance to state-of-the-art D2C systems, we recognize the potential benefits of including open-source models in future work. In subsequent iterations of this research, we plan to incorporate evaluations of open-source models such as LLaMA and Falcon, particularly to ensure broader applicability and transparency of our approach.\\n\\n---\\n\\n> W3: The refinement strategy for datasets could introduce selection bias and skew results. I would be interested in seeing results across a random subset of the test set on the benchmarks used.\\n\\n**Comment**: We appreciate this insightful observation. Following [1], the refinement strategy was designed to improve the focus and relevance of the dataset by prioritizing samples that were incorrectly answered in our specific tasks, as detailed in `Algorithm 1` of the paper. Notably, for the Chess Move Validity dataset, we considered all 1,000 problems (or samples) for all the experiments conducted in the paper. The reason for downsizing the GSM8K and PIQA datasets is that their performance is largely saturated for the considered LLMs. To demonstrate the efficacy of our approach, we downsampled these datasets using `Algorithm 1`. As for the StrategyQA dataset, it contains 2290 questions, making it prohibitively expensive to conduct experiments on all samples [1]. 
However, we believe that testing on the full dataset would help assess the robustness of our approach and verify whether the observed results consistently hold across diverse data splits. Moreover, we have conducted additional experiments on the GSM8K dataset, using all samples within the Multi-Agent Debate framework, as noted below: \\n\\n| **Dataset** | **Open Debate** | Controlled Debate (Right) | Controlled Debate (Wrong) |\\n|-------------------|--------|--------|--------|\\n| ***GSM8k (300 samples chosen using algorithm 1)*** | 89.67% | 93.00% | 93.33%|\\n| ***GSM8k (all 1319 samples)*** | 93.67% | 94.67% | 95.00%|\\n\\n---\\n\\n> W4: The paper spends its first 5.5 pages providing a background on the problem and multi-agent LLM settings. This takes away from its key contributions, which are limited to the problem reframing strategy and very introductory work on scaling laws around fidelity. Section 6.2 is extremely limited and does not back up its claims with linked experiments.\\n\\n**Comment**: We appreciate the reviewer's concerns regarding the balance between background content and key contributions. The detailed background section was included to contextualize our work for a broader audience, but we recognize that it may detract from the primary contributions. In a revised version, we will streamline the background content to focus on essential context and allocate more space to elaborate on our contributions. \\n\\nRegarding Section 6.2, we confirm that all our claims are derived from `Figures 5 and 6` of the paper. We acknowledge this and sincerely apologize for not including the corresponding figure references to support the observations and claims related to scaling laws and fidelity. 
In the revised manuscript, we will address this by adding the appropriate figure references and providing quantitative evidence in the appendix, like https://openreview.net/forum?id=EP6n8LCEK6&noteId=etZ2jBIeZX.\\n\\n---\\n\\n> W5: The appendix presents examples of model outputs; however, it does not provide examples of inputs to the models (especially in the problem reframing setting). I\\u2019ve posed questions around these examples in the Questions section of my review.\\n\\n**Comment**: \\nWe understand that providing inputs alongside outputs is crucial for a comprehensive understanding of the examples, particularly in the problem reframing setting. In future iterations, we will ensure that the appendix includes complete input-output pairs for all presented examples. This will provide greater clarity and transparency, addressing the questions raised and allowing for a more thorough evaluation of the methods proposed.\\n\\n---\", \"ref\": \"[1] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325,\"}",
"{\"comment\": \"> W1: Could have discussed a greater variety of biases other than confirmation bias\\n\\n**Comment**: \\nIn `Section 2` `(lines 115\\u2013123)`, we discuss various biases examined in prior works, along with our findings regarding different biases in D2C frameworks. Specifically, we uncover an inherent confirmation bias in D2C systems and propose a problem reframing strategy to mitigate it. \\n\\nAdditionally, in `Section 6.1.2` `(lines 444\\u2013469)`, we highlight: \\n1. **Affirmative vs. negative agent bias**: Explored in both open and controlled debate scenarios (refer to `Figure 3`). \\n2. **Social biases**: Including gender (or sex) bias, such as male or female bias (refer to `Figure 4`, right bars), and political bias, such as left- or right-wing bias (refer to `Figure 4`, middle bars). \\nWe believe these discussions provide a broader perspective on biases beyond confirmation bias and will clarify them further in the revised manuscript.\\n---\\n\\n> W2: It isn't clear how questions of real-world importance that are open-ended can always be brought into binary form.\\n\\n**Comment**: Replied in the general response.\\n\\n---\\n\\n> W3: Conceptual advances are limited\\u2014scaling laws and reframing techniques themselves feel rather incremental\\n\\n**Comment**: \\nWe acknowledge the reviewer\\u2019s concern about the perceived incremental nature of the scaling laws and reframing techniques. While these methods build upon existing frameworks, our contribution lies in their novel application to D2C systems. Specifically, we demonstrate: \\n- **Scalability**: The potential and limitations of collaborative agent frameworks for real-world tasks versus simpler tasks. `[Section 6.2]`\\n- **Reframing strategies**: The ability to mitigate confirmation bias while maintaining performance. 
`[Section 6.1.1, Lines: 419-442]`\\n\\nThese findings provide actionable insights for improving agent collaboration and addressing biases, advancing the understanding of D2C frameworks in significant ways. We will clarify this contribution in the revised manuscript.\\n\\n---\\n\\n> W4: Line 216 \\\"menifest\\\"\\n\\n**Comment**: \\nThank you for pointing out the typographical error. The spelling of \\\"menifest\\\" has been corrected to \\\"manifest\\\" in the revised manuscript. `[Line 216]`\\n\\n---\\n\\n> Q1: How do you prevent bias in the debate judgments?\\n\\n**Comment**: \\nWe appreciate the reviewer\\u2019s interest in preventing bias in debate judgments. Our current work focuses on uncovering and addressing confirmation bias through a problem reframing strategy. While this strategy effectively mitigates confirmation bias in D2C systems, we acknowledge the need for broader investigations into bias prevention in debate judgments. \\n\\nFuture studies will explore additional mechanisms for detecting and addressing subtle biases, and we plan to address these challenges in subsequent work.\"}",
"{\"title\": \"Feedback on rebuttal\", \"comment\": \"Dear reviewer,\\n\\nThank you once again for your time in reviewing our paper and providing valuable feedback. As the discussion period ends tomorrow, we are reaching out to see if you have any further questions or pending issues.\\n\\nWe have aimed to address your comments regarding the oversimplification of problems, scalability limitations, and the trade-offs involved in bias mitigation. The paper has also been revised to incorporate these considerations.\\n\\nPlease let us know if you have any follow-up comments or require additional clarifications.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"comment\": \"> W1: This paper mainly proposes a benchmark to test and validate these challenges rather than further addressing them.\\n\\n**Comment**: \\nWhile the APF benchmark is a significant contribution of this paper, we would like to emphasize the actionable strategies proposed and linked with experiments to address prejudice and fidelity challenges. For example: \\n\\n- **Mitigating Confirmation Bias**: \\n We introduce problem reframing with controlled initialization (right or wrong solutions) for both model-label and society-label frameworks `[Lines: 419\\u2013442]`. `Table 1` showcases three settings: open (vanilla D2C framework), controlled (right), and controlled (wrong) initialization. Notably, for a complex task like Chess Move Validity on GPT-4o, we achieve a 9.7% performance improvement over the open-ended framework `[Table 1]`. Similarly, `Table 2 (Appendix)` provides performance data across four models and three settings, reinforcing the superiority of the problem reframing strategy. \\n\\n- **Judge Bias Ratio**: \\n Our analysis explores how initial agent roles affect \\\"judge bias ratio\\\" in frameworks like Debate. By quantifying bias ratios `[Lines: 444\\u2013457, Figure 3]`, we show that controlled debate settings significantly decrease affirmative bias influence, particularly for PIQA and StrategyQA, while increasing negative bias. This shift highlights the impact of control mechanisms during debates on judgment formation `[Lines: 444\\u2013457]`. In the revised version, we also propose a potential solution to address social biases in D2C frameworks. \\n\\n- **Exploring Fidelity**: \\n We analyzed scaling behavior in agent interactions and resource usage `[Section 6.2]`. Key findings include: \\n 1. **Improved Scaling**: Complex tasks like Chess Move Validity benefit significantly from scaling resources. `[Figure 5(a), Lines: 496\\u2013510]`\\n 2. 
**Saturation Effects**: Simpler tasks like PIQA show performance plateaus after four agents, indicating diminishing returns. `[Lines: 504\\u2013510]`\\n 3. **Trade-offs in Agent Interactions**: Excessive scaling of interaction rounds leads to degraded performance due to coordination complexity, especially in society-level frameworks. `[Lines: 510\\u2013517, Figure 6]`\\n\\nThese insights and solutions enhance fairness and robustness in multi-agent interactions, offering practical guidance for real-world applications and valuable tools for refining such systems beyond benchmarking.\\n\\n---\\n\\n> W2: In more complex scenarios, problem reframing is difficult.\\n\\n**Comment**: Replied in the general response.\\n\\n---\\n\\n\\n> Q1: In Figure 1, should the question be \\\"A ship travels 80 miles east/west and 150 miles north. How far is the ship from its starting point?\\\"\\n\\n**Comment**: \\nThank you for bringing this to our attention. The question has been updated in the revised version.\\n\\n---\\n\\n> Q2: D2C instead of C2D, e.g., Section 5 Debatepedia, Dataset Problem Reframing, etc.\\n\\n**Comment**: \\nThank you for pointing this out. We have corrected the references to \\\"D2C\\\" in the revised version. Please refer to `lines 287 and 295`.\\n\\n---\\n\\n> Q3: Inconsistent symbol representation. In Section 3, C stands for the total number of calls, whereas in Section 4, C stands for Agent Count.\\n\\n**Comment**: \\nThank you for highlighting this inconsistency. The symbols have been unified in the revised version.\"}",
"{\"title\": \"Additional Results for W4\", \"comment\": \"---\\n\\n`Figures 5 and 6` of the paper illustrate the averaged performance versus resource usage of various LLMs in multi-label and society-level frameworks across four datasets. These figures evaluate different parameters, including (1) the number of agents, (2) the number of debate rounds, (3) the number of tokens, and (4) LLM API calls. In the revised manuscript, we will provide quantitative measures for all the subplots in these two figures, similar to the sample table shown for `Figure 5 (b)`.\\n \\n### Table R1 for `Figure 5 (b)`\\nThe averaged accuracy (Acc) of GPT-4o in multi-agent frameworks on four datasets, with ratios of samples where the number of rounds ($n$) equals or exceeds 1.\", \"for_brevity\": \"- **Open**: Open-ended. \\n- **CR**: Controlled (right). \\n- **CW**: Controlled (wrong). \\n\\n| **Dataset** | | **Open** | | | **CR** | | | **CW** | |\\n|-------------------|-------|------------|--------|-------|------------|--------|-------|------------|--------|\\n| | Acc. | n = 1 | n > 1 | Acc. | n = 1 | n > 1 | Acc. | n = 1 | n > 1 |\\n| ***GSM8k*** | 93.67 | 98.67% | 1.33% | 94.67 | 98.33% | 1.67% | 95.00 | 96.67% | 3.33% |\\n| ***PIQA*** | 92.33 | 96.00% | 4.00% | 92.00 | 97.67% | 2.33% | 91.00 | 99.00% | 1.00% |\\n| ***StrategyQA*** | 80.33 | 92.67% | 7.33% | 79.33 | 90.67% | 9.33% | 79.67 | 88.67% | 11.33% |\\n| ***Chess*** | 67.00 | 59.33% | 40.67% | 74.67 | 65.67% | 34.33% | 79.67 | 66.00% | 34.00% |\"}",
"{\"comment\": \"Replied in the general response.\\n\\n\\n> W3: Scalability Constraints\\n\\n**Observation**: The performance degradation observed in saturated systems indicates a limitation in scaling D2C frameworks effectively.\\n\\n**Comment**: \\nOur experimental observations reveal that scaling improves model performance in more complex tasks, such as Chess Move validity. Adding more agents significantly enhances strategic diversity in these scenarios. However, saturation occurs in simpler tasks; for instance, with the PIQA dataset, performance saturates when adding more than four agents. \\n\\nThus, while scalability constraints may appear in simpler tasks, they are less prominent for complex, real-world problems, where scaling continues to contribute to performance gains. \\n\\n---\\n\\n> W4: Bias Mitigation Trade-offs\\n\\n**Observation**: While the approach reduces biases, it may inadvertently introduce new limitations or biases in certain contexts.\\n\\n**Comment**: \\nWe appreciate the opportunity to address the potential trade-offs involved in bias mitigation. While our framework demonstrates efficacy in reducing biases, we acknowledge the possibility of introducing new limitations or biases, particularly in under-represented domains or tasks. \\n\\nIn the revised manuscript, we will: \\n- Discuss how task-specific characteristics may influence the redistribution or amplification of biases. \\n- Highlight potential limitations in monitoring and mitigating emerging biases during scaling or deployment. \\n\\nThese additions will provide a more balanced perspective on the advantages and trade-offs of our approach.\\n\\n---\\n\\n> Q1: Why do you study D2C frameworks rather than other MAS frameworks? Is D2C a typical and widely adopted MAS framework? What are the incentives behind this choice?\\n\\n**Comment**: \\nThank you for your question regarding our selection of the Diverge-to-Converge (D2C) framework for our study.\\n1. 
**Adoption and Typicality of D2C**: D2C is indeed a typical and widely adopted MAS framework. We conceptualize this adoption as part of a broader trend where more MAS frameworks are embracing the D2C paradigm. This trend is exemplified by the frameworks we have explored in our paper, including self-consistency, consultancy, debate, and LLM agents society, which all follow the D2C approach. Some later follow-up works include [1-4].\\n\\n2. **Benefits and Observations of D2C**: Based on these D2C frameworks, we have been able to identify inherent characteristics such as prejudice and fidelity, which arise from the framework\\u2019s encouragement of agent divergence. This divergence allows for a broad exploration of solutions, which is crucial for potential improvement.\\n\\n[1] Wang, Junlin, et al. \\\"Mixture-of-Agents Enhances Large Language Model Capabilities.\\\" arXiv preprint arXiv:2406.04692 (2024).\\n\\n[2] Li, Dawei, et al. \\\"SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents.\\\" arXiv preprint arXiv:2411.03284 (2024).\\n\\n[3] Li, Yunxuan, et al. \\\"Improving Multi-Agent Debate with Sparse Communication Topology.\\\" arXiv preprint arXiv:2406.11776 (2024).\\n\\n[4] Zhang, Guibin, et al. \\\"Cut the crap: An economical communication pipeline for llm-based multi-agent systems.\\\" arXiv preprint arXiv:2410.02506 (2024).\\n\\n---\"}",
"{\"summary\": \"This paper focuses on the Diverge-to-Converge (D2C) frameworks and highlights the challenges of prejudice and fidelity in D2C frameworks. The authors define prejudice and fidelity as the performance variation under changed conditions and scaling laws, respectively. To evaluate prejudice and fidelity, this paper introduces APF-Bench using the proposed Dataset Refinement. The results confirm the findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-structured and easy to follow. The inclusion of informative figures and tables enhances clarity.\\n2. This paper reveals two key challenges: the impact of initial conditions and the number of agents on the final performance.\\n3. The experiments span many task-domains and multiple models.\", \"weaknesses\": \"1. This paper mainly proposes a benchmark to test and validate these challenges rather than further addressing them.\\n2. In more complex scenarios, problem reframing is difficult.\", \"questions\": \"Minor comments:\\n1. In Figure 1, should the question be \\\"A ship travels 80 miles east/west and 150 miles north. How far is the ship from its starting point?\\\".\\n2. D2C instead of C2D, e.g. Section 5 Debatepedia, Dataset Problem Reframing, etc.\\n3. Inconsistent symbol representation. In Section 3, C stands for the total number of calls, whereas in Section 4, C stands for Agent Count.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank You to Reviewer QY19\", \"comment\": \"Dear Reviewer QY19,\\n\\nWe deeply appreciate your recognition of our work and your insightful feedback, which has been essential in enhancing both the robustness and quality of our study.\\n\\nThank you once again for your valuable support! We are truly thankful for your time and thoughtful consideration.\\n\\nBest regards, \\n\\nThe Authors\"}",
"{\"title\": \"Thank You to Reviewer xVhp\", \"comment\": \"Dear Reviewer xVhp,\\n\\nWe deeply appreciate your recognition of our work and your insightful feedback, which has been essential in enhancing both the robustness and quality of our study.\\n\\nThank you once again for your valuable support! We are truly thankful for your time and thoughtful consideration.\\n\\nBest regards,\\n\\n The Authors\"}",
"{\"title\": \"General Response\", \"comment\": \"### Common Questions\\n\\n- **`W1 (z9FH):`** Limited Real-World Testing: The findings might lack generalizability if not tested in diverse, real-world multi-agent scenarios. \\n- **`W2 (z9FH):`** Potential Oversimplification: Reframing problems as binary questions may oversimplify complex tasks, possibly limiting the depth of solutions. \\n- **`W2 (EH5M):`** It isn't clear how questions of real-world importance that are open-ended can always be brought into binary form. \\n- **`W1 (QY19):`** The paper explores only problem reframing as a bias mitigation strategy. However, not every problem can be converted into a binary problem, and other strategies are not explored at all. \\n- **`W2 (xVhp):`** In more complex scenarios, problem reframing is difficult. \\n\\n---\\n\\n**Comment**: \\n\\nApologies for any confusion regarding the oversimplification of reframing problems as binary questions. We acknowledge that not all tasks can be reduced to binary questions such as \\u201cyes/no\\u201d or \\u201ctrue/false.\\u201d Instead, we present **controlled initialization** through problem reframing for multi-agent systems by leveraging the **potential solution space** of a given problem. \\n\\n### Dataset Examples \\nIn this study, we consider four diverse datasets (for confirmation bias) that address unique, real-world task types of importance [line 266]: \\n\\n1. **PIQA** `[Lines: 270\\u2013273]`: \\n Contains questions with a **binary solution space**, where either \\\"statement-1\\\" or \\\"statement-2\\\" is correct. \\n\\n2. **StrategyQA** `[Lines: 274\\u2013277]`: \\n Also contains questions with a **binary solution space**, where the answer is \\\"Yes\\\" or \\\"No.\\\" \\n\\n3. **GSM8K** `[Lines: 278\\u2013281]`: \\n Includes questions with a **non-binary solution space**; the solution lies in $ \\\\mathbb{R} $ (real numbers). \\n\\n4. 
**Chess Move Validity** `[Lines: 282\\u2013285]`: \\n Similar to GSM8K, this dataset has a **non-binary solution space**. While it is commonly stated that there are 64 possible answers formatted as $[a-h][1-8]$, representing potential chess moves, the actual number of valid moves may vary depending on the specific state of the chessboard (e.g., piece positions, legal moves). Therefore, the solution space dynamically adjusts to the context of the game. Each generated answer was deemed correct as long as it was one of the valid answers in the sequence.\\n\\n\\n### Controlled Initialization Framework \\nLet $Q$ denote a question, $C_A$ the correct answer space, and $C_W$ the wrong answer space. Controlled initialization is structured as follows: \\n- For **controlled wrong initialization**, the prompt is: \\n _\\u201cIs $C_W$ the correct answer to the question?\\u201d_ \\n- For **controlled right initialization**, the prompt is: \\n _\\u201cIs $C_A$ the correct answer to the question?\\u201d_ \\n\\nFor binary tasks like **PIQA** and **StrategyQA**, where $C_W = \\\\sim C_A$, the solution space is straightforward. However, for non-binary tasks: \\n- **GSM8K**: $C_A \\\\subset \\\\mathbb{R}$ (correct numerical solution), and $C_W = \\\\mathbb{R} \\\\setminus C_A$. \\n- **Chess Move Validity**: $C_A$ is the set of valid answers out of 64 possible moves, and $C_W$ encompasses the complement of valid moves. \\n\\n### Broader Applicability and Frameworks\", \"these_controlled_initializations_are_initiated_on_the_affirmative_side_during_the_starting_round_for_both\": \"1. **Multi-label frameworks** (e.g., Self-consistency, Debate, Consultancy). \\n2. **Society-label frameworks** (e.g., CAMEL, LangChain, AutoGen). \\n\\nBy explicitly tailoring the initialization to the solution space, we maintain flexibility to address task-specific complexity. 
In the revised manuscript, we have incorporated these clarifications and further detailed the methodologies discussed above to emphasize the adaptability of our approach.\\n\\n---\\n\\n### Manuscript Updates\\nWe have marked out the updated contents in blue in the pdf. Specifically, the updates include:\\n\\n1. More detailed dataset description on the searching space of solutions, in Section 5.\\n\\n2. Detailed explanation on the controlled initialization, in Appendix A.\\n\\n3. Further discussion on contribution, with the comparison with other LLM Debias Directions, in Appendix E.\\n\\n4. Numerical results for figures, in Appendix F.\\n\\n5. More detailed revisions including typos, layout adjustment, revised figure captions, etc.\"}",
"{\"metareview\": \"While this work tackles an important and interesting topic in so-called multi-agent LLM systems, I believe the current work is not ready to be published. The current work somewhat overstates its contributions by framing the paper as a broad study of prejudice and bias in multi-agent systems (as evident in the title), but only looks at confirmation bias (as pointed out by Reviewer EH5M). The experiments in the paper primarily make speculative claims about the underlying reason for the observed bias, based on the final task performance, while foregoing the opportunity to look deeper into the inference chains leading to these results. A quantitative analysis of the intermediate outputs of these multi-agent systems would provide a richer understanding of the bias studied in this work.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers point out that this work overly simplifies the problem initially motivated in the paper. In particular I am aligned with Reviewer EH5M's concerns, which were not sufficiently addressed in the authors' rebuttal.\"}",
"{\"comment\": \"> Q1: Page 18, Case 2, GSM8k: Could the authors provide complete inputs to the models and their outputs for each iteration?\\n\\n**Comment**: \\nThank you for highlighting this. We will include complete inputs and outputs for each iteration in the GSM8k case study in the appendix of a revised version of the paper. This will ensure that readers can fully understand the iterative process and its impact on the results.\\n\\n---\\n\\n> Q2: Is there a hypothesis around why the results hold and such biases occur in language models? Are there reasonable tests that can be conducted around this?\\n\\n**Comment**: \\nWe hypothesize that the observed biases stem from the inherent structure of pretraining data and the optimization objectives used in training language models. These factors can lead to over-representation or under-representation of certain patterns. To validate this hypothesis, we plan to conduct controlled experiments that isolate specific biases and evaluate their persistence across diverse datasets and tasks. For instance, ablation studies and interventions in pretraining data distribution could provide insights into the underlying causes of such biases.\\n\\n---\\n\\n> Q3: Could there exist better reframing techniques? Why was the binary reframing technique selected? Will it work for all tasks?\\n\\n**Comment**: \\nOur controlled (right or wrong) initialization through problem reframing was chosen for its simplicity and ease of implementation, making it a suitable starting point for exploring problem reframing. However, we acknowledge that more nuanced reframing techniques could yield better results, especially for complex tasks. Future work will investigate alternative reframing strategies, such as multi-dimensional reframing or task-specific dynamic reframing. We will also evaluate the generalizability of these techniques across different tasks to determine their broader applicability.\"}",
"{\"comment\": \"Dear Reviewer z9FH,\\n\\nI hope this message finds you well. We have addressed the concerns raised in our revised manuscript and would greatly appreciate your review and further comments.\\n\\nThank you for your time and expertise.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"comment\": \"Thank you for clarifying the contribution and I have raised my score.\"}",
"{\"comment\": \"I thank the authors for their response. While I acknowledge the authors' clarifications, overall I believe the paper's contributions are nevertheless on the incremental side. A paper that I would feel comfortable with accepting would need to demonstrate some kind of dynamic adaptation to the authors' findings, including using fine-tuning or other post-training approaches to improve the weaknesses of D2C frameworks uncovered. However, in line with ICLR guidelines, I believe that such extensions are out of scope of the current submission. I keep my score.\"}"
]
} |
EP09OGPRzk | L-PINN: A Langevin Dynamics Approach with Balanced Sampling to Improve Learning Stability in Physics-Informed Neural Networks | [
"Minseok Jeong",
"Giup Seo",
"Euiseok Hwang"
] | Physics-informed neural networks (PINNs) have emerged as a promising technique for solving partial differential equations (PDEs). However, PINNs face challenges in resource efficiency (e.g., repeated sampling of collocation points) and achieving fast convergence to accurate solutions. To address these issues, adaptive sampling methods that focus on collocation points with high residual values have been proposed, enhancing both resource efficiency and solution accuracy. While these high residual-based sampling methods have demonstrated exceptional performance in solving certain stiff PDEs, their potential drawbacks, particularly the relative neglect of points with medium and low residuals, remain under-explored. In this paper, we investigate the limitations of high residual-based methods concerning learning stability as model complexity increases. We provide a theoretical analysis demonstrating that high residual-based methods require a tighter upper bound on the learning rate to maintain stability. To overcome this limitation, we present a novel Langevin dynamics-based PINN (L-PINN) framework for adaptive sampling of collocation points, which is designed to improve learning stability and convergence speed. To validate its effectiveness, we evaluated the L-PINN framework against existing adaptive sampling approaches for PINNs. Our results indicate that the L-PINN framework achieves superior relative $L^{2}$ error performance in solutions while demonstrating faster or comparable convergence stability. Furthermore, we showed that our framework maintains robust performance across varying model complexities, suggesting its potential for compatibility with larger, more complex neural network architectures. | [
"Physics-informed neural network",
"Langevin dynamics",
"Adaptive sampling method"
] | Reject | https://openreview.net/pdf?id=EP09OGPRzk | https://openreview.net/forum?id=EP09OGPRzk | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zoYxSAKmpK",
"zRtjDZbuiq",
"zH8OmTn1Ku",
"whEzoy9Uar",
"vbh44XMKit",
"tIyaXGZ6f6",
"qaqFzv3wac",
"naN3WIZRpa",
"leU4lsuZZG",
"l5gtywQeNp",
"gDbWbN41qR",
"f6dzwWwA0S",
"eiv8lbKNbi",
"e8P2FIVpmI",
"dY5AnWCOSF",
"bPyHyXDwrJ",
"XetHgJtHkI",
"V3NSIyvf8W",
"Oy0epZZGce",
"OOu0WXBZ9t",
"NyWjf4BJb7",
"MYocHosYAZ",
"LnLQ7HIhOj",
"K4J9RBKPYH",
"JPSDzYv7So",
"JNP1oy0sbu",
"GUPMdaWOxI",
"GHNzqLYkWM",
"FKKoqjye7v",
"9DDcSJcEYA",
"7QeJ2KT1zY",
"4RBngrwjL5",
"0GUdiZsNO1"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision"
],
"note_created": [
1733104831270,
1732416122870,
1734664708176,
1730614666778,
1732937923970,
1732604847532,
1732418966723,
1729972882798,
1730061464287,
1732580402321,
1732606712852,
1732516173292,
1733194291601,
1732706050204,
1732086680570,
1732516152033,
1732546571071,
1732711782574,
1732086595154,
1732677843806,
1732086560083,
1733071410437,
1732873982974,
1729502647953,
1732719956775,
1732554580019,
1732086588699,
1732589757273,
1732724326180,
1732086579451,
1732516138619,
1732590399153,
1737523468699
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_SkJ6"
],
[
"ICLR.cc/2025/Conference/Submission1774/Area_Chair_DLKa"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_u9zX"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_u9zX"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_SkJ6"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_SaWy"
],
[
"ICLR.cc/2025/Conference/Submission1774/Area_Chair_DLKa"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_Zj8i"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_Zj8i"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_Zj8i"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_Zj8i"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_Zj8i"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_Zj8i"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Reviewer_SaWy"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1774/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"comment\": \"Dear Reviewer Zj8i,\\n\\nAlthough we may not have directly addressed the issue in question, we are glad that we were able to partially address the points of discomfort you raised. Thank you for your continued interest and thoughtful attention! We will make additional adjustments based on the final comments you provided.\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": \"I appreciate the authors' revision. The mathematical derivation for nonlinear PDEs makes sense to me overall. However, I noted an error in Figure 6: the title of each plot contains \\\"layer layer.\\\"\\n\\nTo further clarify the figure, I suggest adding a sentence to the caption to explain that each plot corresponds to a single PINN model with a different number of layers. Initially, I thought the figure shows a deep PINN model with visualizations of $\\\\phi(x)$ for its hidden layers. Once this issue is addressed, I would be happy to increase my score.\"}",
"{\"metareview\": \"This paper considers improving PINN, which uses a neural network to represent the solution to a PDE and seeks it by minimizing an integration of the PDE residual. Accurately evaluating the integrated residual is an outstanding challenge, because PINN is believed to particularly suit high dimensional problems, for which full integrations are however too expensive to perform. The authors proposed a Langevin SDE based approach to randomly sample collocation points for evaluating the residual in a balanced fashion, so that high residual locations are emphasized but low residual locations are not ignored either. Reviewers and I all agree this is an interesting idea. However, there were concerns about robustness / details of the implementation and inadequate evidence of improved performance over existing approaches, which remained unresolved after the rebuttal. Moreover, although the sampling problem matters more in high dimensions, only low dimensional PDEs were tested in the paper. In addition, there was insufficient comparison with non-PINN approaches. Therefore, I encourage the authors to take these discussions into consideration and submit again.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer u9zX was concerned with the inadequate evidence of improved performance over existing approaches, which was not fully resolved post rebuttal.\\nReviewer Zj8i was concerned with robustness / details of the implementation, which was not fully resolved post rebuttal.\\nMoreover, although the sampling problem matters more in high dimensions, only low dimensional PDEs were tested in the paper. In addition, there was insufficient comparison with non-PINN approaches.\"}",
"{\"summary\": \"The paper presents a method to adaptively select the collocation points for solving partial differential equations (PDEs) through physics-informed neural networks (PINNs). In PINNs, the selection of the number and location of collocation points impacts the model training and is a well-known issue in the literature. Various methods have already been proposed in the literature, some of which have been compared in this paper. The paper presents a novel method for selecting the collocation points through empirical results and theory. The method named Langevin PINNs aims to focus not only on high-residual-based locations, but also proposes to balance it with selecting locations with low or medium residual values. Overall, the paper's motivation aligns with improving PINNs for simulating PDEs, showcasing its effectiveness on canonical examples.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is easy to follow for non-theory parts and presents a concise overview of the literature in this domain.\\n\\nThe choice of PDEs is diverse and incorporates diverse challenges observed while training PINNs for simulating PDEs.\\n\\nSample trajectory plots help observe the method's performance.\", \"weaknesses\": \"The rationale for showcasing the method's performance on deep networks is unclear. The advantage gained by the algorithm with a deep neural network is unclear. From Table 1, it seems like the baselines achieve a similar result even with a small network, so why would one opt for a further deep network which is even harder to optimize?\\n\\nThe method is not clearly explained, and it is unclear how to implement the proposed method. It would be appreciated if the authors could provide a more detailed explanation of the method and implementation. \\n\\nThe discussion on the computational complexity of the baseline methods is presented. 
However, the manuscript does not compare the proposed method's computational cost with the baselines. Can the authors provide such a comparison of the computational cost? It would help the readers analyze the advantages of the proposed method in terms of computational cost.\\n\\nAlthough compared with similar methods, can the authors compare their method with RAR-D presented in [1] or justify why the comparison is/should not be performed?\\n\\nIt is difficult to understand what Fig. 5 shows. It seems like the performance of the proposed method is completely off for a deeper network, contrary to the text in the article.\\n\\nAlthough a sensitivity analysis of the proposed method is carried out with different learning rates, the range seems limited. How do the methods perform when trained at an even smaller learning rate?\", \"limitations\": \"The proposed method is presented for low-dimensional problems, and validating its performance on high-dimensional problems is not performed. The selection of collocation points in higher dimensions is also a complex problem. The paper does not discuss how the method scales with the rise in dimensionality.\\n\\nAlong similar lines, the method is not performed for multiscale systems of PDEs, which have challenges in choosing the right collocation points. \\n\\n[1] Wu, Chenxi, et al. \\\"A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks.\\\" Computer Methods in Applied Mechanics and Engineering 403 (2023): 115671.\", \"questions\": \"Included along with weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer Zj8i,\\n\\nThank you for your insightful comments regarding the statement *\\\"As illustrated in Figure 4-(a), it can be observed that only L-PINN and RAD demonstrated stable performance when 10 hidden layers were used.\\\"* We agree with your observation that for smaller architectures (e.g., 4 and 6 hidden layers), the methods exhibit comparable stability. As for the claim that instability increases with the number of layers, we acknowledge that this observation alone may seem a bit stretched.\\n\\nHowever, regarding the accuracy of the statement, our original intent was to highlight the phenomenon observed specifically under the condition of using 10 hidden layers, where most algorithms exhibited instability, while only L-PINN and RAD demonstrated what we defined as \\\"stable performance\\\" (category 3, as defined above). If the statement had been generalized to claim instability \\\"as the number of layers increases\\\" or lacked the explicit reference to \\\"10 hidden layers,\\\" we agree that it could have been interpreted as overstated. However, since the statement explicitly references the observed behavior at 10 hidden layers, we believe it remains valid within its specific context.\\n\\nAdditionally, we appreciate your observation regarding learning rate trends, which further highlights L-PINN's consistent stability across a wide range of values. This observation reinforces our key contribution in analyzing the interplay between model complexity and learning rates in training stability. The robustness of L-PINN under varying learning rates underscores its practical utility and aligns closely with the broader objectives of this study.\\n\\nWhile we are unable to revise the manuscript at this stage, we will ensure this clarification is incorporated in the final version of the paper should it be accepted. 
We hope this response provides sufficient clarity regarding the intended scope of our claims.\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for providing additional experiments and discussions. However, in my opinion, this study does not improve the current literature regarding accuracy (as shown in Table I) or computational cost (discussed in Appendix H). The provided argument that the accuracy can be increased marginally with deeper networks is not ideal for solving canonical problems. The current challenges in PINN-based approaches are accuracy and computational cost. I do not see how the proposed method contributes to alleviating these challenges. Hence, I will continue with my initial assessment.\"}",
"{\"comment\": \"Dear Reviewer SKJ6,\\n\\nThank you for your positive feedback! Your insightful analysis has greatly contributed to enhancing the rigor of our manuscript. Additionally, we have revised the figure and its caption in response to the confusion you pointed out and included the updated version. Once again, we sincerely appreciate your thoughtful comments and suggestions. \\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"summary\": \"This paper examines the relationship between sampling strategies and learning stability in PINNs. The authors provide theoretical analysis showing that sampling methods focused on high-residual points require stricter learning rate constraints for stability, especially with increased model complexity. They present a Langevin dynamics-based PINN (L-PINN) framework that implements balanced sampling proportional to PDE residuals. They validate the effectiveness of the proposed sampling method across multiple PDEs, showing that L-PINN achieves comparable or better relative L2 error performance while maintaining stability across different model complexities and learning rates compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is novel and provides rigorous theoretical analysis of stability issues in high-residual sampling methods. The authors establish clear mathematical connections between sampling strategies and learning rate constraints, backing their theoretical claims with detailed proofs. This helps explain the stability challenges that emerge as model complexity increases.\\n\\n2. Another strength is the paper's thorough empirical validation. The experiments span multiple representative PDEs (Burgers', Convection, Allen-Cahn). The authors carefully compare their approach against various sampling methods, with comprehensive ablation studies on learning rates and network depth.\", \"weaknesses\": \"1. Assumption 3.1 seems too strong for nonlinear PDEs, although I think it is acceptable for linear PDEs.\\n\\n2. Section 5.2's experimental results are dense and difficult to follow - this content could be better organized with supporting details moved to the appendix. \\n\\n3. 
The authors overlook relevant prior work, particularly PirateNet [1], which addresses similar scaling challenges in training deep PINN models through adaptive skip connections.\\n\\n[1] Wang, S., Li, B., Chen, Y. and Perdikaris, P., 2024. PirateNets: Physics-informed Deep Learning with Residual Adaptive Networks. arXiv preprint arXiv:2402.00326.\", \"questions\": \"1. What exactly are the feature vectors being visualized in Figure 6? The paper relies on Assumption 3.1 to express PDE residuals as linear combinations of feature-mapped vectors, but does not clearly define how these feature vectors \\u03d5(x) are obtained for nonlinear PDEs. Are these simply outputs from hidden layers, or do they incorporate PDE residual information? The authors should provide precise mathematical expressions for the visualized quantities.\\n\\n2. If these feature vectors are merely hidden layer outputs without considering PDE residuals, there appears to be a significant gap in the paper's logic. How do these empirical visualizations justify Assumption 3.2 about heavy-tailed distributions of feature vectors that appear in Eq 3.1? This disconnect between the theoretical framework and empirical validation needs to be addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper investigates the limitations of high residual-based methods concerning learning stability as model complexity increases. The authors identify two questions that remain unclear: the lack of theoretical analysis of the balancing effect, and the potential risks of the high residual method. They provide a theoretical analysis and propose a Langevin dynamics-based PINN (L-PINN) framework for adaptive sampling of collocation points. The paper also compares the performance of L-PINN with other adaptive sampling methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper investigates the limitations of high residual-based methods concerning learning stability as model complexity increases.\", \"This work provides a theoretical analysis and proposes a Langevin dynamics-based PINN (L-PINN) framework for adaptive sampling of collocation points.\", \"The paper is well-written and easy to follow.\", \"The figures and tables are clear.\"], \"weaknesses\": [\"The other adaptive sampling methods (baselines) compared in the paper have some important hyperparameters which could significantly affect the performance. For example, in RAD, $k$ and $c$ are important, and improper values of them can cause the method to fail. I cannot find any description of the hyperparameters of the baseline methods. I am concerned about whether the baselines are well-trained.\", \"It is true that applicability to large models is important for these methods. However, applying simple 1-D PDEs to deep MLPs may not be a good choice.\", \"These PDEs do not need such deep MLPs at all. Two or three hidden layers are enough for training the model.\", \"Increasing the depth of MLPs will make the training unstable.\", \"On the other hand, increasing width will also increase the size of the MLPs and will not make the MLPs unstable. 
The paper could do a comparison on this.\", \"It might be more meaningful to adopt other types of layers, such as attention layers and Fourier layers, instead of deep MLPs. I am concerned about whether L-PINN can be applied to the scenarios that really need large models and still be robust.\"], \"questions\": [\"\\u201cAdaptive sampling based on residual distribution\\u201d is introduced at first and then the \\u201cAdaptive sampling focused on high residuals\\u201d. Why do the authors claim the latter method is used to address convergence issues of the former method? Actually, RAR can be regarded as a special case of RAD or RAR-D. The RAD method is used to address the issue of RAR focusing too much on high residuals. RAD has fewer convergence issues than RAR.\", \"RAD methods have experimentally discussed the \\u201cUnresolved questions 2, Potential risks of the high residual method\\u201d (lines 136-139). I admit this paper discusses this problem from a different aspect. But I would like to see what remains unclarified or unsolved about RAD with respect to \\u201cUnresolved questions 2\\u201d.\", \"A period is missing in line 148.\", \"In Figure 4, I suggest adding standard deviations. For Figure 4b, it seems L-PINN only performs better when the learning rate is 0.003. This might result from randomness. I would like to see more cases when the learning rate is between 0.002 and 0.004.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
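The RAD method referenced throughout these reviews draws collocation points with probability proportional to a normalized power of the PDE residual plus a uniform floor, controlled by the hyperparameters $k$ and $c$ discussed above. A minimal sketch of that sampling rule, assuming NumPy; the function name `rad_sample` and the toy Gaussian residual are illustrative stand-ins, and in practice the residual values would come from the trained PINN:

```python
import numpy as np

def rad_sample(candidates, residuals, n_points, k=1.0, c=1.0, rng=None):
    """Residual-based adaptive distribution (RAD) sampling:
    draw points with p(x) proportional to |R(x)|^k / mean(|R|^k) + c."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.abs(residuals) ** k
    p = r / r.mean() + c          # residual term plus uniform floor c
    p = p / p.sum()               # normalize to a probability distribution
    idx = rng.choice(len(candidates), size=n_points, replace=False, p=p)
    return candidates[idx]

# Toy usage: a 1-D candidate pool with a residual spike near x = 0.5.
xs = np.linspace(0.0, 1.0, 1000)
res = np.exp(-200.0 * (xs - 0.5) ** 2)  # stand-in for |R_theta(x)|
chosen = rad_sample(xs, res, n_points=100, k=1.0, c=1.0,
                    rng=np.random.default_rng(0))
```

With `k=1` and `c=1`, part of the probability mass follows the residual landscape and part stays uniform, which is exactly the balance between high-residual focus and exploration that the question about $k$ and $c$ above concerns.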
"{\"comment\": \"Dear Reviewers u9zX, SaWy,\\nIf not already, could you please take a look at the authors' rebuttal? Thank you for this important service.\\n-AC\"}",
"{\"comment\": \"Dear Reviewer u9zX,\\n\\nWe regret that we may not have fully addressed the concerns you raised. In the main text (Table 1), our primary focus was on theoretical analysis and its empirical validation, particularly emphasizing the instability that arises as model complexity increases. Regarding your concerns about accuracy and complexity, we fully understand their significance. Within the limited rebuttal period, we aimed to partially explore these aspects by including an evaluation of 2D PDE performance in Appendix I (w.r.t. accuracy), along with reasoning related to the uncertainty in MCI estimation for high-dimensional PDEs. Additionally, we highlighted that the gradient-based sampling distribution approximation, which does not rely on directly approximating MCI, is experimentally robust in terms of computational complexity with respect to $l_L$, further supporting its practical applicability.\\n\\nThank you once again for pointing out these important aspects and providing constructive feedback!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": \"Dear Reviewer u9zX:\\n\\nAs the author-reviewer discussion period is nearing its conclusion, we kindly request you to review our responses at your earliest convenience. Should you have any additional questions or comments, we will make every effort to address them before the discussion period ends.\\n\\nWe sincerely appreciate your time and valuable feedback. We look forward to hearing from you soon!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": \"We sincerely appreciate the reviewers' insightful feedback, which has been instrumental in enhancing the quality, depth, and clarity of our work. Below, we summarize the key strengths of our approach, address the primary concerns raised, and highlight the additional efforts made during the discussion phase.\\n\\n---\\n\\n### Key Strengths of Our Approach\\n\\n1. **Theoretical Contributions:** \\n While previous studies have demonstrated the success of adaptive sampling methods in improving model performance, a rigorous theoretical framework to analyze these sampling strategies has been limited. Our work bridges this gap by proposing a detailed theoretical analysis that examines sampling methods in relation to model complexity, residual concentration, and stability. This framework not only provides insights into existing methods but also lays the groundwork for advancing adaptive sampling strategies in PINNs.\\n\\n2. **Novelty:** \\n By integrating Langevin dynamics into Physics-Informed Neural Networks (PINNs), our approach introduces a novel mechanism that fundamentally enhances learning stability. Unlike existing methods that rely on Monte Carlo integration, our method avoids direct reliance on these techniques, which makes it inherently more robust when applied to higher-dimensional PDEs. This innovation addresses critical challenges in scaling PINNs to complex problems.\\n\\n3. **Comprehensive Validation:** \\n To validate the proposed theoretical framework, we designed experiments explicitly aligned with the analysis presented in the manuscript. These experiments evaluate key aspects, including the impact of model complexity, residual concentration, and adaptive sampling on learning stability. The results not only reinforce the theoretical insights but also demonstrate the robustness and generalizability of L-PINN across a wide range of PDE scenarios.\\n\\n---\\n\\n### Addressing Reviewer Concerns\\n\\n1. 
**Mathematical Clarification of Feature Vectors:** \\n To address feedback regarding feature vector construction, we revisited the mathematical formulation and clarified the derivation process in **Appendix A**. Specifically, we refined the representation of the feature vectors to better align with assumptions. This update resolves ambiguities and ensures consistency with the proposed theoretical framework, strengthening the connection between theory and practice.\\n\\n2. **Scalability to Complex Scenarios:** \\n To address concerns regarding scalability, we conducted new experiments on two-dimensional PDEs, including Burgers' and Heat equations, which are outlined in **Appendix I**. Additionally, we performed a computational complexity analysis, detailed in **Appendix H**, to evaluate the efficiency of L-PINN in complex scenarios (more collocation points, higher dimension). These results highlight L-PINN's ability to efficiently scale to complex scenarios without compromising stability or performance.\\n\\n3. **Comparative Analysis with Alternative Architectures:** \\n In response to reviewer feedback, we expanded our comparative analysis to include alternative architectures, such as random Fourier features, attention mechanisms, and modified MLPs (e.g., incorporating residual blocks) detailed in **Appendix J**. These additional experiments confirmed that L-PINN maintains compatibility and effectiveness across diverse model structures, further underscoring its adaptability and versatility.\\n\\n4. **Clarification of Explanations:** \\n We recognize the ambiguities in certain parts of our manuscript, particularly regarding distinctions between methods like RAD and RAR-D and their theoretical implications. Additionally, we acknowledge that the interpretation of **Figure 4-(a)** may have introduced confusion. 
While immediate revisions are not feasible, we will address these points in the final version by clearly separating and structuring the relevant discussions and improving the explanation of experimental results, especially those in Figure 4-(a). These refinements aim to enhance the clarity and accessibility of our explanations.\\n\\n---\\n\\n### Final Remarks\\n\\nThe reviewers' constructive feedback has been invaluable in improving our manuscript and resolving potential ambiguities. Through additional experiments, expanded comparative analyses, and planned improvements to explanations, we have strengthened both the theoretical and experimental foundation of L-PINN.\\n\\nWe believe that our work offers a meaningful contribution to the field by providing a robust framework that improves learning stability in PINNs through balanced sampling strategies. Once again, we sincerely thank the reviewers for their thoughtful comments and constructive suggestions, which have greatly enriched our work.\", \"title\": \"Final Summary of the Rebuttal\"}",
"{\"comment\": \"I thank the authors for updating the box plot, which I now think is much clearer.\\n\\nDo you know what is the origin of the outliers? It could be that the learning rate used is too big and the loss spiked towards the end of training (e.g. for $L^\\\\infty$ and Random-R) or that it was too small and never improved (e.g. R3). Anyways, I believe that it is not something related to the specific methods themselves but just about their training. \\n\\nAs mentioned in the previous response, I believe that now it is no longer true that \\\"As illustrated in Figure 4-(a), it can be observed that only L-PINN and RAD demonstrated stable performance when 10 hidden layers were used.\\\". All competing methods and the proposed approach show stable performances across different number of layers.\\n\\nOverall, I believe that the proposed approach provides an interesting alternative to re-sampling methods with Langevin dynamics. Experimentally, this is shown to provide less sensitivity to the learning rate. However, I do not see other advantages of the proposed method compared to other methods. For these reasons I would like to keep my score as is.\"}",
"{\"comment\": \"We appreciate your feedback on our paper. We have done our best to answer your keen questions.\\n\\n---\\n\\n> **W1** \\n\\nWe acknowledge that linking the importance of the learning rate to the flow of the text may create room for misinterpretation. However, the main claim of our paper regarding the learning rate is not \\\"PINN training requires a higher or lower learning rate,\\\" but rather, \\\"different algorithms prioritize concentration on high residuals differently, which, in turn, leads to stability issues depending on the learning rate and model complexity.\\\" We hope this point has been clarified. \\n\\nAdditionally, if I interpret your concern correctly, it suggests, \\\"Why not simply train with a lower learning rate?\\\" This raises an intriguing question for us as well. To explore this further, we expanded the cases and visualized the results in **Fig. 4-(c)**. Based on our findings, a learning rate of at least 0.0005 appears necessary for various sampling methods to undergo a meaningful learning process.\\n\\n---\\n\\n> **W2** \\n\\nYou are correct that, in practice, the parameters that require tuning when implementing Langevin dynamics are $\\\\tau$ and $l_L$. However, based on our experiments with the default Langevin hyperparameter settings ($l_L = 1$, $\\\\tau = 0.002$) across various conditions, we observed robust performance for most PDE problems. This even held true for high-dimensional PDEs, as added in **Appendix I**. For the PDEs we examined, it does not seem necessary to have a large $l_L$, which is a highly desirable outcome from the perspective of computational complexity analysis.\\n\\n---\\n\\n> **Q1** \\n\\nAccording to **Theorem 4**, the neural network $ f_\\\\theta $ is fixed (and thus the residual landscape remains unchanged), with the conditions requiring $ l_L $ to be sufficiently large and **$ \\\\tau$ to be sufficiently small**. 
From a practical implementation perspective, two factors must be considered: \\n1. The extent to which $f_\\\\theta$, i.e., the residual landscape, varies over iterations. Empirically, based on sample trajectories, we observed relatively low temporal variation. \\n2. $l_L$ and $\\\\tau$ must be inversely related, such that if $l_L$ increases, $\\\\tau$ should decrease, and vice versa.\\n\\nInterpreting the results in **Appendix G**: The Langevin update equation $\\\\mathbf{x}^{l+1}=\\\\mathbf{x}^{l}+\\\\frac{\\\\tau}{2}\\\\nabla_{\\\\mathbf{x}}|\\\\mathcal{R}_{\\\\theta}(\\\\mathbf{x}^{l})|^2+\\\\beta\\\\sqrt{\\\\tau}z$ indicates that, under low $\\\\beta$ (less exploration), performing multiple gradient updates with a relatively large step size $\\\\tau$ compared to $l_L$ causes sample points to cluster around local modes, resulting in an overemphasis on high residuals. As demonstrated in our prior analysis, this leads to degraded stability. Furthermore, from the perspective of distributional convergence in Langevin dynamics, this imbalance between $l_L$ and $\\\\tau$ also prevents convergence to the desired asymptotic population. \\n\\nConsequently, under relatively low $\\\\beta$, if $l_L$ is relatively large compared to the step size $\\\\tau$, sample points are drawn toward sharp modes in the residual landscape, inevitably resulting in a highly unstable learning process. \\n\\n\\n\\n---\\n\\n> **Q2** \\n\\nAs a follow-up to **Q1**, we are currently conducting experiments to compare the performance across varying $\\\\tau$ and $l_L$ under fixed $\\\\beta$ with proper balancing. However, due to resource constraints, we are unable to provide these results immediately. 
We will upload the findings as soon as they are completed.\\n\\n---\\n\\n> **Q3** \\n\\nExperimental results related to computational complexity have been included in **Appendix H**.\\n\\n---\\n\\n> **Q4** \\n\\nThe visualization shows the absolute value of the gap between the exact solution and the predicted solution. There seems to have been a mistake in the description, leading to confusion. To avoid this, we have revised the figure caption for greater clarity.\\n\\n---\\n\\nLastly, we have incorporated all the minor comments you provided.\"}",
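The Langevin update equation quoted in the response to **Q1** can be sketched numerically. This is a minimal illustration, not the authors' implementation: the gradient of the squared residual is taken by central finite differences (a real PINN would use automatic differentiation), and `toy_residual` is a hypothetical stand-in for $\mathcal{R}_\theta$:

```python
import numpy as np

def langevin_collocation_step(x, residual_fn, tau=0.002, beta=1.0, l_L=1,
                              lo=-1.0, hi=1.0, rng=None, eps=1e-5):
    """Run l_L steps of x <- x + (tau/2) * grad_x |R(x)|^2 + beta * sqrt(tau) * z,
    mirroring the Langevin update discussed above. Gradients use central finite
    differences here as a stand-in for automatic differentiation."""
    rng = np.random.default_rng() if rng is None else rng
    sq = lambda y: residual_fn(y) ** 2
    for _ in range(l_L):
        grad = (sq(x + eps) - sq(x - eps)) / (2.0 * eps)  # d/dx |R(x)|^2
        z = rng.standard_normal(x.shape)                  # exploration noise
        x = x + 0.5 * tau * grad + beta * np.sqrt(tau) * z
        x = np.clip(x, lo, hi)  # keep collocation points inside the domain
    return x

# Toy usage: drift a uniform pool of 1-D points toward a residual peak at x = 0.3.
toy_residual = lambda x: np.exp(-50.0 * (x - 0.3) ** 2)  # stand-in for R_theta
pts = np.linspace(-1.0, 1.0, 201)
moved = langevin_collocation_step(pts, toy_residual, tau=0.002, beta=1.0, l_L=5,
                                  rng=np.random.default_rng(0))
```

The trade-off described above is visible in this sketch: shrinking `beta` while increasing `l_L` relative to `tau` makes the drift term dominate, pulling points into sharp residual modes, whereas the default small `tau` with noise keeps the sample population spread out.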
"{\"comment\": \"Dear Reviewer SaWy:\\n\\nRegarding W2-4, we have included additional experimental results in Appendix J.\\n\\nAs the author-reviewer discussion period is nearing its conclusion, we kindly request you to review our responses at your earliest convenience. Should you have any additional questions or comments, we will make every effort to address them before the discussion period ends.\\n\\nWe sincerely appreciate your time and valuable feedback. We look forward to hearing from you soon!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": [\"I would like to thank the authors for their answers and for running additional experiments, which should not be taken for granted given the limited time for the rebuttal. And thanks for updating the pdf as well.\", \"As a general remark, I would suggest the authors (for future rebuttals) to consider color-coding the changes made to the pdf so that it is much clearer for reviewers where and what has been edited. I now compared the two pdfs and I have a clearer picture (so no need to do it now).\", \"Some further comments on the new results/edits:\", \"I especially liked the new Figure 4, which now displays uncertainties as well. Looking at the old version, I think the plot was not very informative and possibly misleading, since the uncertainties reveal that some results that previously looked clearly separated are now compatible within the uncertainties (for instance the results for learning rates 0.002 and 0.003).\", \"With the new plots in Figure 4 I get the following insights: (i) a learning rate above 0.002 is too high for most methods, (ii) similarly, 0.0001 is too small and (iii) in the range 0.0001-0.001 the methods have comparable performance. I think my initial concern that a smaller learning rate would have been helpful for competing methods seems to be confirmed by the new results (previously it was explored only in the range 0.001-0.004). So I wouldn't say the proposed method results in better performance but rather that it is less sensitive to the (initial) learning rate (since anyway you are using Adam).\", \"Why are the results for the relative L1 error for n_layers=10 in the new Figure 1a different from the ones you had before? I noticed that now you have a log-scale but this still doesn't explain the difference with the previous results. 
According to the new Figure 4a, it is no longer true that \\\"As illustrated in Figure 4-(a), it can be observed that only L-PINN and RAD demonstrated stable performance when 10 hidden layers were used.\\\" Now every method except R3 is stable across architectures.\", \"The computational complexity analysis in Appendix H is, I think, very useful. My take-away is that the proposed approach is computationally viable as long as $l_L\\\\leq5$, for which the runtime is already $\\\\times 5$ that of Fixed/Random/R3/RAD.\", \"Are the uncertainties in Figure 4 obtained by repeating the run with 5 different seeds as in Table 1? The same question applies to other results where $\\\\pm$ results are reported (e.g. Tables 5, 6). I would suggest writing this explicitly every time repeated experiments are reported (if space allows, possibly directly in the caption).\"]}",
"{\"comment\": \"Dear Reviewer Zj8i,\\n\\nThank you for your insightful comments and observations. Below, we provide a detailed response to your points:\\n\\nFirst, we select the model's performance based on the point in the training process where the overall loss curve reaches its lowest value. In other words, the reported performance is not based on specific values from sudden spikes in the loss. This behavior is clearly illustrated in the learning curve presented in **Appendix E.2**, specifically in the bottom of **Figure 10**.\\n\\n> **Question:** Do you know what is the origin of the outliers? It could be that the learning rate used is too big and the loss spiked towards the end of training (e.g. for $L^{\\\\infty}$ and Random-R) or that it was too small and never improved (e.g. R3). Anyways, I believe that it is not something related to the specific methods themselves but just about their training.\\n---\\n\\nRegarding the experimental setup, all algorithms utilized a learning rate of $\\\\eta = 0.001$, with a scheduler multiplying the learning rate by 0.9 every 5000 iterations. Our observations indicate that for the Allen-Cahn equation, the ability to drop below a threshold of 50 serves as a reliable benchmark for stable convergence. However, as layer depth increases, the number of iterations required to meet this threshold also grows, as detailed in **Appendix E.2, Figure 10**. This trend is particularly pronounced for $L^{\\\\infty}$ and R3, while L-PINN and RAD demonstrate relatively greater robustness under these conditions. Ultimately, this implies that for deeper networks, the learning rate must decay sufficiently before the relative $ L^2 $ error can drop below 50, which takes a significant number of iterations and can be interpreted as the cause of outliers. The reason why this becomes a cause of outliers will become clearer as we address the part you commented on below. 
\\n\\n> **Comment:** As mentioned in the previous response, I believe that now it is no longer true that \\\"As illustrated in Figure 4-(a), it can be observed that only L-PINN and RAD demonstrated stable performance when 10 hidden layers were used.\\\". All competing methods and the proposed approach show stable performances across different number of layers.\\n\\nWe also agree with your hypothesis that \\\"a smaller learning rate might enable unstable algorithms to learn effectively even with deep layers.\\\" This aligns with our interpretation of your comment, suggesting that \\\"if learning rates are appropriately adjusted as layers deepen, stability could be ensured for all algorithms.\\\" However, we note a potential issue with this approach: while a lower learning rate may enable earlier convergence below the threshold of 50, **it could also result in diminished learning ability during the remaining iterations due to reduced learning rates.** Although this discussion pertains to the case where the number of layers is 4 without a scheduler, the limitation arising from naively lowering the learning rate is evident in the results presented in **Figure 4-(c)** of our paper.\\n\\n\\n> **Concern:** However, I do not see other advantages of the proposed method compared to other methods. \\n---\\n\\nBased on the responses to the question and comment, we believe that the stability of our proposed L-PINN lies in its reduced sensitivity to both layer depth and learning rate, which we consider a significant strength of our algorithm. That said, we fully acknowledge the importance of the issue you raised and are committed to exploring it further.\\n\\n---\\n\\nTo address your concerns, we are planning to conduct additional experiments by setting the layer depth to 10 and using an initial learning rate of 0.0005. 
Furthermore, to provide better clarification regarding whether the occurrence of outliers is due to randomness, we will expand the experiments shown in **Figure 4-(a)** by adding five more seeds for the case of layer 10. We hope these efforts will help resolve any remaining uncertainties. However, due to the time required for additional experiments, we will upload these results as soon as they are completed, aiming to do so before December 2nd or 3rd, within the discussion period specified by ICLR, to enable further discussion based on your and our comments.\\n\\nIf you have additional questions or comments about these experiments, please do not hesitate to reach out. Once again, thank you for your valuable insights.\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": \"Thank you for your valuable review and constructive suggestions. We have addressed your comments point by point below.\\n\\n---\\n\\n> **W1** \\n\\nI also generally agree with the validity of the assumptions from an analytic perspective. However, I believe that these issues are somewhat mitigated during the learning process, leveraging the representation power of neural networks. A more detailed discussion on this topic is provided in **Q1**.\\n\\n---\\n\\n> **W2** \\n\\nThe PDE experimental settings for Section 5.2 have been moved to **Appendix F**.\\n\\n---\\n\\n> **W3** \\n\\nThis was a paper we had not previously reviewed, but upon examination, we found it to be a meaningful work with motivations closely aligned with ours. As such, it has been added as a reference addressing scalability with respect to model complexity. \\n\\nThat said, while this paper approaches the stability issue from a model architecture perspective, our focus is on adaptive sampling techniques, highlighting a key difference. However, as suggested in this paper, it is indeed meaningful to investigate performance metrics relative to model architectures. Furthermore, it is important to study how these structures interact with adaptive sampling methods. \\n\\nTo this end, we are currently conducting experiments to verify whether the proposed L-PINN exhibits compatibility issues with various model architectures. While the full architecture of PirateNet [1] is not yet publicly available, we are testing its main components, such as random Fourier blocks, attention mechanisms, and residual blocks, to assess their compatibility. We will report the results along with the completed experiments within a few days.\\n\\n\\n[1] Wang, S., Li, B., Chen, Y. and Perdikaris, P., 2024. PirateNets: Physics-informed Deep Learning with Residual Adaptive Networks. arXiv preprint arXiv:2402.00326.\\n\\n---\\n\\n> **Q1, 2** \\n\\nYou are absolutely correct. 
We acknowledge the vulnerabilities in the previously derived definitions and derivations of the feature vector. To address this, we have included detailed mathematical expressions and estimation methods in **Appendix A**. Thank you for your comment; it has been invaluable in strengthening our theoretical foundation.\\n\\nTo briefly explain the feature vector extraction process: We represent $\\\\mathcal{R}_{\\\\theta}(\\\\mathbf{x})$ as [$g(f)$]$(\\\\mathbf{x})$, and apply a first-order Taylor approximation to [$g(f)$]. The Taylor expansion utilized at this point introduces the concept of the Fr\\u00e9chet derivative $D_g(f)$ in function spaces. The detailed construction and explanation related to this are provided in **Appendix A**.\\n\\nAs a result, we obtained experimental outcomes that align more clearly with **Assumption 3.2** and **3.3** compared to previous derivations. Moreover, while this result represents a local approximation of feature vectors, we observed that it operates reasonably well even for non-linear PDEs.\"}",
"{\"comment\": \"Dear Reviewer Zj8i:\\n\\nWith less than 24 hours remaining in the rebuttal period, we kindly request your feedback on our responses. Additionally, we would greatly appreciate it if you could briefly indicate whether our replies sufficiently addressed your concerns to the extent that it might influence your decision positively, or if there will be no change in your evaluation. Thank you in advance for your time and consideration!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"title\": \"Regarding the rebuttal response format\", \"comment\": \"We sincerely thank the reviewers for your detailed and insightful comments. We have made every effort to address as many of the raised issues as possible, and the updated version of the paper has been uploaded.\\n\\nOur rebuttal responses reference this updated version of the paper. Consequently, the locations of specific content addressing prior concerns may have shifted slightly, and we kindly ask the reviewers to take note of this.\\n\\nAdditionally, we will incorporate further experimental results into the paper whenever possible before the rebuttal deadline.\"}",
"{\"comment\": \"Thanks for continuing the discussion. I agree that the statement is factually true (for these 5 seeds) but my impression while reading the paragraph was somehow that the proposed method is more robust to an increase in architecture complexity, which however is limited to 10 layers (and now less evident when considering the median instead of the mean). Maybe this impression was given by the next sentence \\\"These results indicate that high residual methods are more susceptible to increasing model complexity, whereas L-PINN remains robust.\\\" which I think is a bit stretched.\\n\\nI would recommend the authors to adapt these two sentences such that the claim is clearly restricted to the specific case under study. And I would agree that using more seeds would make the intended statement clearer (but still won't make it more general).\"}",
"{\"comment\": \"Honestly, I think that the statement \\\"As illustrated in Figure 4-(a), it can be observed that only L-PINN and RAD demonstrated stable performance when 10 hidden layers were used,\\\" was a bit stretched also before, since it was anyway tested on a single dataset and the trend was shown only for 10 layers while for other number of layers all methods are comparable. On the other hand, results over different learning rates show stable performance over a wider range of values.\\n\\nTherefore, I don't believe that experiments with 10 seeds would dramatically change our insights. My current understanding (also from this experiment with different number of layers) is that the proposed approach is more stable across different choices of initial learning rates.\"}",
"{\"summary\": \"In this paper the authors highlight some potential shortcomings of residual-based methods for training PINNs and the lack of theoretical understanding thereof. In particular, the authors show theoretically that convergence requires a tighter upper bound on the learning rate. Furthermore, the authors propose a novel algorithm to train PINNs that exploits Langevin dynamics. The main idea is, instead of resampling the collocation points at each iteration (proportionally to the residual loss), to update existing sample position based on residual loss gradient.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"the paper is clearly written and easy to read.\", \"the idea of using Langevin dynamics to update sample points instead of resampling is elegant.\"], \"weaknesses\": [\"while the theoretical contribution on the learning rate is interesting, it seems to be not very useful in practice. Based on the ablation study in Figure 3a and 4a it seems like that the steepness changes only slightly and that picking a smaller learning rate would already help convergence of current methods.\", \"furthermore, as the authors highlighted in the paper, Langevin dynamics comes at the price of two new additional parameters, which need to be fine tuned. As the authors themselves note, experiments in Appendix E suggest that the choice of these two parameters highly influence the performance of the proposed method.\"], \"questions\": [\"Theorem 4.1 relies on an asymptotic limit for reaching collocation sample population. Therefore, I expected that performance would improve as the number of Langevin steps increases. Can you elaborate on why this is not the case based on the results in Appendix E? (the authors note that this behaviour is particularly true for small $\\\\beta$ but also in the other cases it is hard to see an improvement if $l_L$ is increased). 
Somehow these results seem to imply that reaching collocation sample population is not useful in practice.\", \"It would be interesting to see quantitatively how much the choice of the step size $\\\\tau$ in L-PINN affects convergence. It might be useful to have a comparison along the lines of the evaluation in Appendix E for $\\\\beta$ and $l_L$, but in this case as $\\\\tau$ varies.\", \"Instead of sampling new collocation points to find high-residuals, the proposed approach updates points according to the residual gradient. Intuitively, this seems to be more intense computationally, since sampling can usually be done rather cheaply. Did you run any experiments in this direction?\", \"I find Figure 5 a bit confusing. Could you clarify whether you are showing the error with respect to the solution or the solution itself?\"], \"some_minor_typos\": [\"line 88: I would add $R_\\\\theta(x)$ after \\\"residual\\\" such that it is clear what $R_\\\\theta(x)$ is\", \"line 105: should use a comma instead of a point before \\\"i.e.\\\"\", \"line 148: missing a point before \\\"Additionally\\\"\", \"line 430: for L-PINN, RAD, R3 and L* \\\"exact solution\\\" should be \\\"predicted solution\\\"\", \"line 460-466: results for the Burgers' equation with 8 layers for Random-R and R3 should also be in boldface\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I thank the authors for the follow-up response.\\n\\nConcerning the first question, I think now the discussion is becoming quite focused on the precise learning rate and scheduler used. I think that it might have an influence but it is not the most interesting aspect here, at least in my opinion.\\n\\nMy insight from Figure 4a is that given a fixed learning rate (and scheduler) all methods perform similarly well across number of layers (except for R3 with 10 layers). The result you had before showed worse performance on the mean value (instead of the median) because of one outlier. This was the reason that originally motivated the sentence \\\"As illustrated in Figure 4-(a), it can be observed that only L-PINN and RAD demonstrated stable performance when 10 hidden layers were used\\\", which however I think now it is not supported by the updated results.\\n\\nMaybe your insight was coincidentally correct and your algorithm is indeed more stable across different number of layers, but at the moment I don't think there is evidence supporting the claim.\\n\\nI don't believe that additional experiments with a different learning rate will be particularly enlightening for this matter, but of course I might be wrong.\"}",
"{\"comment\": \"Dear Reviewer Zj8i,\\n\\nThank you for your thoughtful and detailed feedback! Your insightful observations have been incredibly helpful in clarifying key aspects of our analysis. \\n\\nTo address your main question, \\\"Unlike the previous version, the results for $n_\\\\text{layers} = 10$ appear more stable for $L^{\\\\infty}$ and Random-R, similar to RAD,\\\" I would first like to note that we used 5 random seeds when generating **Figure 4**, just as we did for **Table 1**. In the earlier version, the representative values plotted in the figure were based on the **mean**. However, in the updated version, we used the **median** as the central metric for the boxplot. As a result, the values for $L^{\\\\infty}$ and Random-R appear more stable than RAD. \\n\\nTo provide a clearer understanding, we have included the raw results for $n_\\\\text{layers} = 10$ based on the 5 seeds: \\n\\n- L-PINN: [1.03728259, 1.39095485, 1.17176408, 0.93006007, 0.76291417], mean = 1.058, median = 1.037 \\n- R3: [1.58583503, 49.7361213, 35.45759022, 35.65890789, 49.9127984], mean = 34.470, median = 35.658 \\n- RAD: [1.65305007, 1.31262923, 1.17702512, 1.15792109, 1.5187026], mean = 1.363, median = 1.312 \\n- **$L^{\\\\infty}$**: [49.25066531, 1.20032271, 2.64070798, 0.89049591, 0.77396245], **mean = 10.951, median = 1.200** \\n- **Random-R**: [1.08244075, 1.47837037, 51.34871602, 2.19732579, 1.34252282], **mean = 11.489, median = 1.478** \\n\\nAs shown above, the discrepancy between the mean and the median caused some confusion, particularly for $L^{\\\\infty}$ and Random-R. Additionally, to enhance the clarity of the plots, we generated the boxplots containing outlier points.\\n\\nYour observation was spot-on and helped us identify and address this issue. To reduce potential misunderstandings for readers, we have updated the figure caption to explicitly mention the use of repeated seeds and added a note about random seeds at the beginning of the ablation study section. 
Furthermore, we clarified this aspect by incorporating color coding (blue) in the updated version.\\n\\nAdditionally, we strongly resonate with your take-home messages (regarding learning rate and computational feasibility), and we are glad that our intentions have been effectively communicated.\\n\\nOnce again, thank you for your kind and perceptive comments\\u2014they have greatly enhanced the clarity of our work. If you have any further questions or concerns, please do not hesitate to leave a comment!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": \"We sincerely thank you for taking the time to review our manuscript and provide insightful comments. Your suggestions have greatly contributed to enhancing the presentation of our work. Our detailed responses are provided below.\\n\\n---\\n\\n> **W1** \\n\\nWe apologize for the confusion. All the settings we utilized throughout the manuscript are based on the default settings provided in the publicly available codes of the baseline papers. Additional details have been included in **Appendix F.2**.\\n\\n---\\n\\n> **W2-1** \\n\\nThe primary message we aimed to deliver through this paper focuses on the stability issues arising from model complexity, and most of the experiments we designed were intended to support this point. Due to this focus, we could not explore applications to more complex PDEs. However, in **Appendix I**, we discussed the compatibility of the proposed L-PINN with 2D PDEs and demonstrated its superiority over other adaptive sampling schemes.\\n\\nWe reason that these experimental results can be explained by the following factors detailed in **Appendix I**. In summary, we attribute this to the inaccuracy of Monte Carlo integration, $\\\\mathbb{E}|\\\\mathcal{R}_\\\\theta(x)|^k$, during the sampling process. This inaccuracy becomes more pronounced in higher dimensions with a limited number of collocation points. Unlike other methods that directly rely on Monte Carlo integration, L-PINN avoids this approach, contributing to its robust performance in such scenarios.\\n\\n> **W2-2, 3** \\n\\nAdditional experimental results related to this issue have been included in **Appendix E.2**. Although the effect is less pronounced compared to depth, we observed similar phenomena with respect to width.\\n\\n> **W2-4** \\n\\nWe also agree that this is an important point. However, addressing more complex problems at this stage would be challenging. 
We believe it is crucial first to validate whether L-PINN faces compatibility issues with architectures other than MLPs. In this regard, we are conducting experiments using random Fourier blocks, modified MLP (using skip connections), and attention mechanisms. We will attach the additional results once the experiments are completed in a few days.\\n\\n---\\n\\n> **Q1** \\n\\nYou are correct that the explanation might cause confusion in the flow of the text. It is true that RAR could be considered a special case of RAD and can be understood as an attempt to address its limitations. However, the criterion we aimed to distinguish was whether the algorithm fundamentally estimates the residual sampling distribution. Rather than focusing on the relationship between algorithms, we aimed to categorize them based on their operational methods. We will clarify this distinction to make the separation more apparent.\\n\\n---\\n\\n> **Q2** \\n\\nWe apologize for the confusion. The message we intended to convey was to emphasize the lack of theoretical reasoning for the same phenomenon. However, we agree that this overlaps with the first unresolved question. We will combine the two to present the idea more cohesively.\\n\\n---\\n\\n> **Q3** \\n\\nWe have identified and corrected the typographical errors.\\n\\n---\\n\\n> **Q4** \\n\\nTo address your point regarding the distributional characteristics of the results and the presence of randomness, we have included a boxplot for improved visualization in **Figure 4**. Furthermore, we analyzed the performance of the algorithms for learning rates between 0.002 and 0.003, as detailed in **Figure 4-(d)**. The results show that as the learning rate approaches 0.003, the algorithms begin to exhibit instability sequentially. At 0.003, all algorithms except L-PINN consistently fail to converge, and at 0.004, all algorithms fail to converge with virtually no randomness.\"}",
"{\"comment\": \"Thank you for the responses. Most of my concerns have been well addressed. I am impressed by the new experiments. I have increased my score from 6 to 8.\"}",
"{\"comment\": \"Dear Reviewer Zj8i,\\n\\nThank you for the prompt feedback. Your summary has greatly clarified the direction we need to focus on for further discussion.\\n\\nFirst, I believe the notion of **\\\"stable performance\\\"** must be clearly defined. Based on my understanding of your argument:\\n\\nIn the earlier version (without raw results based on 5 seeds), when evaluating the results based on the **mean**, it seemed that the error across all seeds would be approximately 10\\u201311. In this case, the statement, *\\\"As illustrated in Figure 4-(a), it can be observed that only L-PINN and RAD demonstrated stable performance when 10 hidden layers were used,\\\"* was valid. However, with the updated results based on the **median**, when excluding outliers, this statement seems invalid because performance is similar across algorithms.\\n\\nI think this is the point where we need to reach a consensus. To summarize the experimental results so far, we can categorize them into three cases:\\n1. exhibits failures and does not achieve desirable performance (R3), \\n2. occasionally exhibit failures but achieve desirable performance ($L^{\\\\infty}$ and Random-R), \\n3. exhibit no failures and achieve desirable performance (L-PINN and RAD).\\n\\nIt seems that your interpretation might be grouping categories 2 and 3 together to denote **stable performance**, whereas, in our case, the term **stable performance** refers to the third category only.\\n\\nIf our understanding is correct, would it be helpful to expand the experiments by increasing the number of seeds for the 10-layer case as evidence supporting the claim? This might allow us to more accurately measure the frequency of failures for each algorithm.\\n\\nOnce again, thank you for your swift feedback and valuable comments. If you have further questions or suggestions, please do not hesitate to share them!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": \"Thank you for your thorough review and helpful suggestions. Below, we provide responses to your comments.\\n\\n> **W1**\\n\\nWhile shallow neural networks may achieve similar results in some scenarios, **Figure 4-(a)** shows that the relative $L^2$ error improves consistently for all sampling methods as the number of layers increases, even without learning rate decay. For L-PINN, **Appendix G** further demonstrates improved performance across most $\\\\beta$ values with deeper architectures, suggesting that increasing layer depth is a viable strategy, even for simple 1D PDEs.\\n\\nAdditionally, **Table 1** highlights L-PINN's stable and robust performance with deeper architectures, unlike other algorithms such as Random-R and RAD, which exhibit instability in specific cases (e.g., Allen-Cahn and KdV equations). These findings underscore the reliability of L-PINN across various PDEs without compromising stability.\\n\\n\\n---\\n\\n> **W2**\\n\\nTo address your concern regarding implementation details, we have added Python-style pseudo code in **Appendix K** to illustrate the specific considerations made during the implementation phase. This addition aims to provide a clearer understanding of the methodology and its practical application, ensuring transparency and reproducibility.\\n\\n---\\n\\n> **W3**\\n\\nTo provide clarity on the computational efficiency of the proposed method, we have included a detailed analysis in **Appendix H**. In this section, we measured the time required for neural network training over 1000 iterations while varying the number of collocation points (100, 1000, 10,000, 50,000, and 100,000) and increasing the problem's dimensionality from 1D to 2D. The results are organized in a table, presenting the time taken (in seconds) for each configuration. 
This analysis demonstrates how the number of collocation points and problem dimensions affect the computational time during training.\\n\\n---\\n\\n> **W4**\\n\\nIn [1], RAR-D achieves more efficient computation cost compared to RAD by gradually concatenating sampled points to the initial random points (half of the fixed number of collocation points) rather than using the fixed number of collocation points from the beginning. However, since the initial points are not updated, this approach can result in solution error degradation compared to RAD. Furthermore, as demonstrated in [1, 2], RAD consistently outperformed RAR-D in terms of Relative $L^2$ error for solution accuracy. Based on this analysis, we believe that comparing with RAD alone sufficiently covers the performance aspects of RAR-D to a reasonable extent.\\n\\n[1] Wu, Chenxi, et al. \\\"A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks.\\\" Computer Methods in Applied Mechanics and Engineering 403 (2023): 115671. \\n[2] Daw, Arka, et al. \\\"Mitigating propagation failures in physics-informed neural networks using retain-resample-release (r3) sampling.\\\" ICML 2023.\\n\\n---\\n\\n> **W5**\\n\\nUpon reviewing the issue you pointed out, we realized that the results for prediction errors were mistakenly labeled as prediction values. This has been corrected, and the updated version has been uploaded for your review.\\n\\n---\\n\\n> **W6**\\n\\nThe primary focus of our study is on scenarios where the learning rate is relatively high, as there is limited prior research reporting results for lower learning rates. To address this gap, we conducted experiments under lower learning rate conditions and included the findings in **Figure 4-(c)** of the manuscript. 
Based on the results, we observed that a learning rate of at least 0.0005 is necessary for various sampling methods to undergo a meaningful learning process.\\n\\n---\\n\\n> **L1**\\n\\nRegarding the complexity of PDE problems, while it was challenging to explore a wide range of cases within the limited time frame, we conducted experiments on Burgers' 2D and Heat 2D problems and included the results in **Appendix I**. These experiments demonstrated that L-PINN outperforms other baseline algorithms in terms of performance.\\n\\nWe reason that these experimental results can be explained by the following factors detailed in **Appendix I**. In summary, we attribute this to the inaccuracy of Monte Carlo integration, $\\\\mathbb{E}|\\\\mathcal{R}_\\\\theta(x)|^k$, during the sampling process. This inaccuracy becomes more pronounced in higher dimensions with a limited number of collocation points. Unlike other methods that directly rely on Monte Carlo integration, L-PINN avoids this approach, contributing to its robust performance in such scenarios.\\n\\n\\n---\\n\\n> **L2**\\n\\nOur scope primarily focuses on the mathematical analysis of the weaknesses in adaptive sampling methods and the experimental validation of those findings. While we have not explored multiscale systems in this work, we acknowledge their importance and will consider further verification and improvements to extend the applicability of our approach to such systems in future research.\"}",
"{\"comment\": \"Dear Reviewer Zj8i:\\n\\nRegarding Q2, we have included additional experimental results in Appendix G.2.\\n\\nAs the author-reviewer discussion period is nearing its conclusion, we kindly request you to review our responses at your earliest convenience. Should you have any additional questions or comments, we will make every effort to address them before the discussion period ends.\\n\\nWe sincerely appreciate your time and valuable feedback. We look forward to hearing from you soon!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"comment\": \"Dear Reviewer SaWy,\\n\\nWe\\u2019re glad to hear that our experiments addressed your concerns. In particular, your points regarding compatibility with other model architectures and changes from the perspective of width were aspects we hadn\\u2019t considered thoroughly. Thank you for your positive feedback!\\n\\nBest regards, \\nICLR 2025 Conference Submission1774 Authors\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}"
]
} |
EOPLy80bBm | Disentangling the Roles of Representation and Selection in Data Pruning (for Fine-Tuning) | [
"Yupei Du",
"Yingjin Song",
"Hugh Mee Wong",
"Daniil Ignatev",
"Albert Gatt",
"Dong Nguyen"
] | Data pruning, the process of carefully selecting a small subset of training data, has been shown to improve both training efficiency and performance. It typically involves two steps: (1) obtaining a representation for each instance, and (2) applying a selection algorithm using these representations. However, the distinct roles of these two steps, as well as their interactions, remain unclear. To address this, we conduct a systematic study of data pruning, focusing on NLP fine-tuning. Our theoretical and empirical findings reveal that data representation often plays a more fundamental role than the selection algorithm: gradients, despite being computationally expensive, provide stronger pruning signals than other representations, making gradient-based methods consistently outperform cheaper alternatives. We also demonstrate that different selection algorithms excel in specific scenarios but are heavily influenced by the chosen representation. These insights provide clear guidelines for future research and practical applications. | [
"data pruning",
"fine-tuning"
] | https://openreview.net/pdf?id=EOPLy80bBm | https://openreview.net/forum?id=EOPLy80bBm | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"u8mgWZ3mSh",
"emkKjFCbAn",
"dLuzklRIyE",
"ObtxI2J7Dl",
"Nt1WBaINgR",
"3w8KNclfZ8"
],
"note_type": [
"official_comment",
"official_review",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1732301858718,
1729472774310,
1732302077611,
1730678830795,
1730342765933,
1730516107252
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3587/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3587/Reviewer_Lgd8"
],
[
"ICLR.cc/2025/Conference/Submission3587/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3587/Reviewer_FAXu"
],
[
"ICLR.cc/2025/Conference/Submission3587/Reviewer_UMtE"
],
[
"ICLR.cc/2025/Conference/Submission3587/Reviewer_CmBp"
]
],
"structured_content_str": [
"{\"title\": \"Thank You\", \"comment\": \"We sincerely appreciate the thoughtful and detailed feedback provided by the reviewers. We will use their insights to improve our work and prepare a stronger version in the future. Thank you for your time and effort.\"}",
"{\"summary\": \"This work is about data pruning methods\\u2014algorithms that score individual datapoints and retain small/moderate subsets to maximize model performance. The authors suggest that certain methods can be disentangled into two stages, 1) deriving a representation for each data point and 2) applying a scoring/selection mechanism based on those representations. The paper provides an overview of these methods, some analysis of the relationships between certain implementation choices, and then performs some empirical comparisons. The goal is to provide a deeper understanding of these methods and guidelines for future usage.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper overviews a large number of related works, and provides a short and simple description of many of them\", \"It seems to introduce a previously unrecognized perspective in this line of work, that many methods can be disentangled into separate choices for their underlying representation and their scoring/selection rule\", \"The theory suggests some similarity between the different representation choices (hidden states, training dynamics and gradients)\", \"The experiments compare several existing methods, along with new methods combining different representation and scoring combinations that were not previously explored\"], \"weaknesses\": [\"One of the main points of the paper is disentangling the two main implementation choices for certain pruning methods (the representation and scoring rule). There are a couple aspects of this overview of related works that could be improved:\", \"At the beginning of the paper, it was difficult to tell what the authors meant by their second stage. 
The second paragraph includes the following text: \\\"second, selecting instances based on these representations given a data budget (e.g., 30% of the training set), according to a selection algorithm.\\\" Perhaps you could provide a quick example, like the distance to cluster centroid idea, to clarify that the free parameter in this stage is not simply the amount of data to retain.\", \"The overview of methods at times reads like a laundry list with too little structure (mainly section 3.1). To emphasize that you're identifying consistent implementation choices within each method, it would help to introduce notation for each one (the representation and scoring approaches) and include tables showing the options demonstrated by these methods. Otherwise, you're putting too much burden on the reader to remember each method and perform this mapping themselves.\", \"Disentangling between the two stages would be especially interesting if you can identify new combinations of implementation choices that are promising new methods. I believe this was done to some extent in figure 1 (an experiment that does not measure performance), and a bit more in figure 4. Can you clarify whether any new useful algorithms were discovered, or whether more exploration of this type seems worthwhile? The paper did not seem as focused on deriving new and improved methods as I would have expected, and instead focused on describing differences.\", \"The paper ultimately gives the sense that many of the methods surveyed here basically don't work, or only work in specific settings (mainly the methods in section 2). Is there any way to focus the discussion on the methods that matter most, or provide some context to the reader about what works well or is popular in practice?\"], \"there_are_a_couple_remarks_about_related_works_that_seem_off\": [\"In lines 117-118, the authors mention Pruthi et al. as an example of using influence functions to estimate datapoint importance. 
This work does not describe itself as using influence functions (it compares to them), so is this a novel interpretation you're providing? If so, that may merit some explanation.\", \"In lines 125-127, the authors refer to Feldman & Zhang as an example of using gradients as a measure of self-influence. This work doesn't use gradients, it compares predictions after retraining with different datasets.\", \"About the theory in section 3.2:\", \"One of the comparisons you make here is between $|p_\\\\theta(y_i \\\\mid x_i) - p_\\\\theta(y_j \\\\mid x_j)|$ and $||g_w(x_i, y_i) - g_w(x_j, y_j)||_2$. Based on the various methods you described previously, I'm not sure if either of these are used by any existing works? The second is at least related to gradient inner products (relevant to influence functions, TRAK, LESS), but the first quantity doesn't seem like it has any potential for effective data pruning. Why is it a useful point of comparison?\", \"Related to my request above for putting each method's implementation choices in a table: because that isn't currently in the paper, it's hard to keep track of which methods this section might apply to, even if indirectly. The summary says that the main takeaway is there are similarities between analyzing training dynamics, gradients and hidden states; that seems to imply certain existing methods are more similar than people realize, but I can't tell which methods those would be. Anchoring this subsection in specific methods seems important, otherwise it's a bit disconnected.\", \"The conclusions from the simulated experiments in section 3.3 are not very surprising. 
They are the following: 1) methods with the same objective (e.g., finding difficult examples) select different datapoints depending on their representation/scoring (of course), 2) the same scoring rule can select different datapoints depending on which representation they use (naturally), but 3) sometimes not (for the sole case of the diversity-prioritizing method, which makes sense). These experiments are a bit more like sanity checks than providing new insights.\", \"It would have been nice if the paper made generalizable claims about which methods work well, or perhaps components of methods that reliably work well (e.g., a representation that often works well with different scoring criteria). The closest we get to that is section 4, which compares a handful of methods on a few datasets. Although we get a sense of which methods score datapoints similarly (only a couple pairs of methods), and we see that a couple methods consistently don't work (those based on hidden states), the paper overall does not go very deep into the pros/cons of different implementation choices or attempt to make generalizable claims. Perhaps you could clarify what the field should take away from this newfound relationship between methods, particularly regarding the development of improved algorithms (since all of the methods tested here fail at some point in the experiments)?\"], \"a_few_other_points_about_the_experiments\": [\"In lines 385-386: these points about Spearman correlation are also true for Pearson correlation, are the results consistent across metrics?\", \"You included an extra point on the x-axis for LESS in fig 2a-b.\", \"The text does not describe how LESS and memorization produce relatively similar scores (fig 2a-b).\", \"Why do you include LESS-balanced in fig 2d when none of the others methods have their balanced counterparts here? It might also make more sense to put fig 2f next to fig 2d.\", \"Why do you use LESS-OOD in fig 2e? 
If this is different than the normal application of LESS here, perhaps you can show the normal usage as well?\"], \"questions\": \"Several questions are mentioned in the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper investigates the important research problem of understanding the roles of representation and selection strategy in data pruning problems. Data pruning is often conducted in some representation space, but the choice of representation is often different for different methods or in different use cases. The respective roles of the choice of representation and the data pruning strategy have long been unclear. This work aims to contribute a more systematic study towards this problem.\\n\\nThe paper first reviews and organizes a list of commonly used data selection/pruning methods and categorizes their representation spaces into \\\"Training dynamics\\\", \\\"Hidden states\\\", and \\\"Gradients\\\". Then, the paper analyzes different representations and their interaction with data selection strategies with derivations and simulations on stylized models. Finally, the paper conducts a number of experiments on 3 NLP fine-tuning tasks with pre-trained models, DeBERTaV3 and OPT, and draws a number of conclusions from the findings. The paper also conducts ablation studies on fine-tuned embeddings and additional tasks, etc.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is very well-positioned. The problem being investigated, understanding the respective roles of representation and data selection strategies, is crucial and highly relevant. Contributions from this angle are very much anticipated.\\n\\nThis paper serves as a nice, compact study. It studies a meaningful and timely problem and is self-contained and reasonably structured. The data pruning/selection methods implemented in this work are diverse and representative, covering a variety of seminal works. The literature review is also favorable.\\n\\nThe paper has a nice combination of theoretical analysis, synthetic simulations, and empirical studies on commonly used benchmarks. 
The findings and conclusions are in the right direction.\", \"weaknesses\": \"The manuscript being reviewed contains a number of inconsistencies and ambiguities in multiple places. Some key concepts are not clearly distinguished from each other and are used interchangeably. Also, some modifications to the baseline methods seem to go beyond the scope of the original works, which essentially makes them new methods rather than the ones with the original names. Experiments are confined to fine-tuning for NLP tasks, which are not what most of these methods were proposed for. Experiments are all small-scale. The theoretical analysis does not provide many in-depth insights. The conclusions and findings are in the right direction but do not tell much beyond the prior hypotheses.\\n\\nSome detailed feedback. \\n\\n1. The title of the paper says \\\"for fine-tuning\\\", but it is not referred to or discussed throughout the paper except that the experiments are all fine-tuning tasks.\\n\\nData pruning for fine-tuning could potentially be a quite different problem. The model already has prior knowledge, so while selecting data, one may want to avoid samples the model is already doing well on (or not). For example, in instruction-tuning tasks for LLMs, following [Lima: Less is more for alignment. C Zhou et al.], a wealth of works have been proposed to achieve comparable performance with a small fraction of instruction samples.\\n\\nNone of the baseline methods implemented in this work was proposed for such use cases. Rather, these methods focus on the case of training from scratch. Mentioning fine-tuning in the title or confining the experiments to such tasks disconnects from the main content of this paper.\\n\\nBesides, for fine-tuning, the practical challenge is often the lack of high-quality data rather than the computation cost. If one considers large-scale fine-tuning, it may become continual learning, which is a problem of its own flavor. 
Ref: [Scaling laws for transfer. D Hernandez et al.]\\n\\n\\n2. This paper uses the notions of data pruning and data selection problems interchangeably. There are several important distinctions.\\nData pruning often refers to the case with an (over)abundance of data where the model can achieve comparable performance after discarding redundant training data. The goal is to retain the model's original performance while removing as much data as possible. It is typically done in a one-shot manner. Except for Memorization, which was proposed for understanding data influence, all other baseline methods implemented in this work belong to this type. That's why they often prioritize hard samples or remove duplicate samples, since those samples provide little marginal contribution in a large-data regime.\\n\\nOn the other hand, data selection for machine learning studies the general meta-learning problem of how to select training data to optimize certain objectives for the resulting model, which could be performance/efficiency/fairness, etc. Multi-round data selection methods are often categorized as active learning, which has a rich field of literature. If the goal is to achieve the best possible performance with a small data budget (such as 20% of the original dataset), the task is often referred to as coreset selection.\\n\\nFurther, noisy data selection is a diagnostic problem and not the main consideration for data pruning. These kinds of problems are often addressed with data influence or data valuation methods, which try to understand/quantify an individual data point's contribution to the set objective (e.g., model performance).\\n\\nConsider \\n- Clearly define these terms early in the paper\\n- Consistently use the appropriate term throughout\\n- Discuss how their findings may differ for data pruning vs data selection tasks\\n\\n3. The paper could benefit from restructuring some of the sections. For example, in Section 3.2 the information is not very straightforward. 
The style is a mix of elaboration and derivations. If it is intended to be theoretical analysis, structuring it in Theorem-Remark style may substantially improve both its rigor and clarity.\\n\\nConsider presenting the key theoretical results as formal theorems or propositions, followed by explanations and implications. This would help separate the main analytical insights from the supporting details.\\n\\n4. Main conclusions are overly ambiguous. Conclusion 1: \\\"data pruning methods may not be effective\\\". This does not tell much more than intuitions\\u2013most methods may not always work. A crucial research question is \\\"when these methods are effective and when they are not\\\", which may reveal previously unknown intrinsic patterns and guide the research toward future improvements. Similarly for the conclusion, \\\"representations are more important\\\". Why are they more important? Is that always the case?\\n\\nConsider\\n- Provide more specific conditions under which data pruning methods are or are not effective\\n- Quantify the effectiveness (or lack thereof) of different methods in various scenarios\\n- Discuss the implications of these findings for future research directions in data pruning\", \"questions\": \"1. This work defaults to using \\\"representations from the model that we are training. This allows us to analyze signals that directly reflect its learning behavior.\\\" This is a very strong assumption, basically requiring the embedding to be perfectly aligned with the target case/model/data. This may not always be possible in practice, especially when data selection needs to be done in one round rather than in an online/active-learning fashion. The mismatch between embedding space and target tasks could significantly affect the effectiveness of data selection pipelines. Given that the paper aims to study this problem as its main focus, it is not very reasonable to skip this discussion here.\\n\\n2. 
The paper considers \\\"memorization\\\" as a \\\"gradient\\\"-style method and argues \\\"Using the TracIn influence function, this self-influence score can be estimated as the gradient norm.\\\" This is not proposed in the original paper [Feldman and Zhang, 2020]. Is this originally proposed in this work? It is not an established practice to approximate memorization scores with first-order gradients. As mentioned in the training dynamics part of the paper, this actually approximates only the final step gradients and may or may not reflect what the model has picked up from this sample during training. Please clarify if this is a novel interpretation or adaptation of the original method. If it is novel, justify why this approximation is valid and discuss its limitations; if it is not novel, provide proper citations for this interpretation.\\n\\n3. The experiments on \\\"fine-tuning hidden states\\\" may not tell the full story. For example, for the task of hateful speech recognition with pre-trained BERT models, if using vanilla BERT models which have no knowledge of the target tasks, its embedding space distance may not be relevant to whether a sample is considered \\\"hateful\\\" or not. But after training the model on this classification task, its final-layer embedding will push samples with different labels to different clusters which are often linearly separable. Conducting data selection on such embedding space is likely to yield very different results than using task-agnostic embeddings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper investigates the roles of data representation and selection algorithms in data pruning for NLP fine-tuning tasks. The study divides existing data pruning methods into categories based on data representation (training dynamics, hidden states, and gradients) and selection objectives (maximizing difficulty, diversity, or validation performance). Through theoretical analysis, they show that gradients and prediction probabilities encode more information than hidden states as data representations. Through extensive experiments on both synthetic datasets and NLP datasets, the authors claim that data representations play a more significant role than the selection algorithms, largely affecting the samples selected. They discover that gradient-based representations achieve better performance under their single-task NLP fine-tuning scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides a broad review of current data pruning methods, considering various representations (training dynamics, hidden states, gradients) and selection objectives (difficulty, diversity, validation performance), which helps readers understand the current landscape of data pruning.\\n2. The paper offers a theoretical analysis of three data representations\\u2014hidden states, prediction probabilities, and gradients\\u2014revealing the signals captured and encoded in the similarities of these representations.\\n3. It conducts an extensive experimental comparison of data representations and selection algorithms, evaluating them on both synthetic and NLP datasets. The paper emphasizes that data representations tend to be more fundamental than selection algorithms in determining the quality of pruned data and gradient-based representations generally perform better. This provides practical guidance for data pruning for NLP task-specific fine-tuning.\", \"weaknesses\": \"1. 
The paper does not provide a systematic review of existing data pruning methods. The authors focus only on the contents of the methods (including data representations and selection algorithms), while overlooking the specific scenarios these methods are designed for. This narrow focus makes their attempt to disentangle representations and selection algorithms appear artificial. In practice, data pruning methods are designed as unified systems, with both components tailored for targeted scenarios. The authors\\u2019 approach of swapping representations and applying them in a single-task setting may diverge from the methods\\u2019 original intent, limiting the credibility of universal conclusions.\\n2. Although the authors note in the Limitation section that this study focuses on single-task fine-tuning, the experiments include methods that are designed for other scenarios. For instance, S2L is developed for domain-specific fine-tuning, and LESS targets downstream task-specific fine-tuning. Comparing these methods in a uniform task-specific setting disregards their intended contexts, creating an unfair comparison that undermines the validity of the findings regarding the superiority of data representations or methods.\\n3. Some prior work is misinterpreted. In section 2.1, methods are categorized as maximizing data difficulty as the selection objective. However, the mentioned work When Less is More by Marion et al. finds that data of moderate difficulty is most beneficial, which contradicts the paper\\u2019s claim. In section 3.3, the authors list S2L as \\u201crepresentation-agnostic\\u201d. However, the data representation (training trajectories) used in this algorithm is specially designed to address domain-specific fine-tuning challenges where hidden states may be less effective.\\n4. While the empirical results are valuable, they do not provide significant new insights. 
In 3.3 and 4.2, the authors \\u201cfind\\u201d that different selection methods with the same difficulty objective select different samples. However, as \\u201cdifficulty\\u201d is not a fixed data property but a human-defined measurement that differs across methods, this variation is unsurprising. Moreover, LESS, which maximizes validation performance, is naturally expected to have better performance under the authors\\u2019 single-task fine-tuning setting, but this may not generalize to other unexplored settings.\\n5. The presented work focuses solely on evaluating existing approaches. No new data pruning methods or frameworks are proposed, which limits its originality and methodological contribution.\", \"questions\": \"1. While the paper focuses on task-specific fine-tuning and excludes general instruction fine-tuning, could the authors discuss whether the findings could extend to broader contexts? Or are the findings limited to specific tasks?\\n2. Methods like LESS and S2L were originally applied to billion-parameter models, but the current experiments use models with only up to 350M parameters. Have the authors considered testing on larger models, such as LLaMA-7B or at least Pythia-1B, to improve the study\\u2019s credibility and applicability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors conduct a review of data pruning methods. They identify two key components of these methods: the representation of the data, and the pruning strategy applied on top of this representation. The representations are divided into three broad categories: gradient based, hidden state based, and training dynamics based. The pruning strategies also have three broad categories based on what they seek to maximize: diversity, difficulty, or performance (on a validation set). They conduct experiments using these pruning strategies on synthetic 2D Gaussian mixture datasets to show the impact different combinations of representation/pruning strategy have on the final selection. Finally, they test the different pruning strategies on several real-world NLP tasks and conclude that gradient-based methods tend to have the best performance, albeit at the cost of computational efficiency.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Reproducibility studies or extensive comparisons of existing techniques have very high practical value for the ML community. Especially as datasets and models continue to grow in size, strategies for speeding up training, removing low quality data, and reducing storage costs (such as data pruning) will be increasingly important. Thus, this topic is highly relevant to the ICLR community.\\n\\nThe authors summarize and explain the methods under study very clearly. The related work section is also extensive and clearly written.\\n\\nThe paper also offers a key insight into why LESS, which was generally the most performant pruning method, failed on the CAD dataset due to a label imbalance which was exacerbated by the method (lines 448-452). They then propose a simple alteration (label balancing) which restores the performance of LESS.\", \"weaknesses\": \"There is a flaw in the reasoning for the theoretical analysis on lines 243-244. 
The authors claim that because the sigmoid function is smooth and monotonically increasing, when $|\\\\sigma(x) - \\\\sigma(y)|$ is small it should also be the case that $|x-y|$ is small, but this is not true. In fact, since the sigmoid flattens out at $\\\\pm\\\\infty$, the difference in sigmoids can be made arbitrarily small while the difference in arguments is arbitrarily large. (E.g., $|\\\\sigma(x) - \\\\sigma(x^2)|\\\\to 0$ but $|x-x^2|\\\\to\\\\infty$ as $x\\\\to\\\\infty$.) I don't see any easy way to fix this flaw in the reasoning.\\n\\nThe motivation for the theoretical analysis conducted in Section 3.2 is not clear, as the quantities studied (difference in correct output probabilities or gradients) aren't actually used by any of the pruning methods. The connection between the theoretical analysis and the pruning methods under study should be made more explicit.\\n\\nThe value of some of the main conclusions is also not clear. For instance, on line 309, the authors emphasize that \\\"even when data pruning methods have the same objective, the representations and selection algorithms used can result in drastically different subset selections.\\\" This is not necessarily surprising, as the data pruning methods do not *literally* have the same objective, they just fall into the same category defined by the authors of the present paper. An equally plausible explanation for this phenomenon is that the categories of objectives defined in the present paper do not meaningfully separate pruning objectives.\\n\\nFinally, while reproducibility or comparison studies of existing work *can* be quite valuable, they should be exhaustively thorough in order to merit publication at a top venue like ICLR. There were many other methods listed in the related work section which were not tested, and it is not clear if the conclusions drawn in the paper should extend to the many other methods available. 
In order to provide convincing evidence that the decomposition into representation + selection strategy and the classification of representations/selection strategies defined by the authors are meaningful, other related methods should be tested to verify that the qualitative claims made by the paper are generally applicable.\", \"questions\": \"What exactly is the meaning of \\\"training instances that are difficult for models to fit often contain fewer regularities\\\" (lines 162-163)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EOLBKobfd1 | Neural Circuit Architectural Priors for Quadruped Locomotion | [
"Nikhil X. Bhattasali",
"Venkatesh Pattabiraman",
"Lerrel Pinto",
"Grace W Lindsay"
] | Learning-based approaches to quadruped locomotion commonly adopt generic policy architectures like fully connected MLPs. As such architectures contain few inductive biases, it is in practice common to incorporate priors in the form of rewards, training curricula, imitation data, or trajectory generators. In nature, animals are born with priors in the form of their nervous system's architecture, which has been shaped by evolution to confer innate ability and efficient learning. For instance, a horse can walk within hours of birth and can quickly improve with practice. Such architectural priors can also be useful in ANN architectures for AI. In this work, we explore the advantages of a biologically inspired ANN architecture for quadruped locomotion based on neural circuits in the limbs and spinal cord of mammals. Our architecture achieves good innate performance and comparable final performance to MLPs, while using less data and orders of magnitude fewer parameters. Our architecture also exhibits better generalization to task variations, even admitting deployment on a physical robot without standard sim-to-real methods. This work shows that neural circuits can provide valuable architectural priors for locomotion and encourages future work in other sensorimotor skills. | [
"neuroscience",
"neural circuits",
"motor control"
] | Reject | https://openreview.net/pdf?id=EOLBKobfd1 | https://openreview.net/forum?id=EOLBKobfd1 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"nfbplzPzhG",
"jJfRaaMALw",
"hYBF0AhJE3",
"fTdqkrb2YT",
"aLeIekd8zT",
"QcnVq3mu8t"
],
"note_type": [
"meta_review",
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1735181748589,
1737523814400,
1730406863140,
1730430401989,
1730669848617,
1730719071313
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7067/Area_Chair_HFei"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7067/Reviewer_Sb2E"
],
[
"ICLR.cc/2025/Conference/Submission7067/Reviewer_LDgZ"
],
[
"ICLR.cc/2025/Conference/Submission7067/Reviewer_sjvx"
],
[
"ICLR.cc/2025/Conference/Submission7067/Reviewer_QxDf"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors provided a summary of contributions and reviewers' concerns.\\nThe authors raised concerns about reviewers R3 and R4. It might be more convincing to have another round of reviews with updated reviewers.\\n\\nAs mentioned by the authors, this paper is about a neural circuit model for the neuroscientific study of motor control, targeting ICLR\\u2019s \\u201capplications to neuroscience\\u201d track.\\nThe authors presented MuJoCo simulation experiments to control the Unitree A1 robot without neuroscientific experiments.\\nR3 asked for baselines against SOTA robotics methods, which didn't satisfy the authors given their focus on applications to neuroscience. \\nThe AC didn't find any neuroscientific experimental results. It would be nice to show some (even very simple) neural recordings and behavior analysis during motor tasks, as proof of its \\u201capplications to neuroscience\\u201d.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers R1 and R2 updated their scores to 6 after the discussion phase.\\nThe AC made the evaluation based on the final manuscript, discussions, opinions, and scores.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"The paper presents a biologically inspired neural architecture for quadruped locomotion, named Quadruped NCAP, designed to replicate mammalian neural circuits in the limbs and spinal cord. In contrast to traditional artificial neural networks (ANNs) like multilayer perceptrons (MLPs), which are widely used in robotic locomotion but lack inductive biases, Quadruped NCAP incorporates neural architectural priors, allowing it to operate with better data efficiency and computational resource savings. The study demonstrates that this architecture achieves comparable performance to MLPs while requiring fewer parameters and displaying enhanced generalization to varied terrains and task settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Biologically-Informed Structural Priors: Integrates neural circuit patterns, allowing the model to leverage structural priors that support data-efficient learning and adaptability across various terrains.\", \"comparison_with_traditional_anns\": \"Contrasts NCAP with conventional artificial neural networks like multilayer perceptrons (MLPs), highlighting that NCAP requires fewer parameters and less data, avoiding the need for large datasets and extensive computational resources.\", \"real_world_testing_and_robust_generalization\": \"Evaluates NCAP\\u2019s performance in simulations and on a physical robot, demonstrating effective generalization and adaptability without additional domain adaptation.\", \"weaknesses\": \"Hand-Tuned Parameters: The model\\u2019s Rhythm Generation (RG) module and Brainstem Command are set manually rather than learned, which could make it harder to adapt to new tasks or find the best settings automatically.\", \"fixed_speed_limitation\": \"The model currently supports only one set speed, meaning it cannot adjust its walking or running pace, which limits flexibility in different environments.\", \"limited_to_locomotion_tasks\": \"This study only 
tests NCAP on walking tasks, so it\\u2019s unclear how well this approach would work for other movements or types of tasks.\", \"questions\": \"How might the model\\u2019s learning stability or convergence be affected if RG and Brainstem parameters were learned rather than hand-tuned? Would automatic learning potentially introduce instability or require a specialized training regime?\\n\\nIf parameters were learned for one locomotion task, could they be transferred or fine-tuned for different tasks (e.g., varied speeds, uneven terrain)? How transferable are these parameters across different conditions?\\nDoes the current architecture allow for speed and gait adaptation on the fly, and if not, what modifications would be necessary to support dynamic speed changes in real time?\\n\\nWould automatically learning these parameters add significant computational overhead, and if so, how could this be mitigated? Are there any techniques that could allow for efficient online learning or adaptation of these parameters?\\n\\nCould the authors compare Quadruped NCAP with other models that incorporate different types of architectural priors (e.g., symmetry, task-based priors) to assess specific advantages of neural circuit-based priors?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work is inspired by the limb and spinal cord neural circuits of mammals, and explores the advantages of an ANN architecture for quadruped locomotion control. Compared to traditional neural networks such as MLPs, neural circuits can provide valuable prior information for locomotion and achieve good performance with minimal data and parameters.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Inspired by the ability of horses to stand and walk within a few hours of birth, a biologically inspired ANN architecture based on limb and spinal cord neural circuits provides a novel perspective in the field of quadruped locomotion control.\\n2. Compared to traditional neural network architectures such as MLPs, this method uses very few model parameters, and the simulation experiments achieve similar performance.\\n3. It shows generalization across different tasks; even in situations not encountered during training, such as different terrains and speeds, this method maintains relatively good performance.\", \"weaknesses\": \"1. The authors mainly introduce the various components of the model in the method section, and the roles played by these components during the training and inference processes are not clearly explained.\\n2. Normally, using an MLP as the neural network with the online reinforcement learning algorithm PPO can control quadruped robots to walk on more complex terrains such as stairs. However, this research method was only tested on flat and bump terrains, and did not verify whether quadruped NCAP can achieve motion control of quadruped robots in more complex scenarios.\\n3. It has been demonstrated in the experiments that the prior knowledge of quadruped NCAP is valuable. 
However, the PPO policy trained solely on speed tracking as a reward signal does not perform well, so the comparison cannot prove a performance advantage over reinforcement learning.\", \"questions\": \"1. The article mentions manually adjusting the RG module and BC command, but what is the process of adjustment? What role did each module play during the training and deployment process?\\n2. The locomotion control of quadruped robots on multiple terrains is already quite mature; why not train models on more complex terrains? Is it because the generalization of the method is limited?\\n3. The online reinforcement learning algorithm PPO is relatively easy to implement for motion control of robots on flat ground and can be deployed on physical robots, but using only speed tracking rewards cannot train a useful PPO policy. Although your experiment has proven the value of prior knowledge, what are the performance advantages of your method? Are there any issues with the subjects of the comparative experiment?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This research introduces \\\"Quadruped NCAP,\\\" a new way to make four-legged robots walk by copying how animal nervous systems work. Instead of using traditional artificial intelligence that needs lots of training data, this system copies the natural nerve circuits found in animal legs and spinal cords, using only 92 parameters (while traditional methods use nearly 80,000) yet works remarkably well. The system helps robots walk naturally and adapt to different surfaces, demonstrating that copying designs from nature can create better and more efficient robot control systems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Originality & Quality**\", \"First successful translation of mammalian neural circuits to robot control\", \"Dramatically efficient: uses only 92 parameters vs 79,372 in traditional methods\", \"Thoroughly tested in both simulation and real world, with open-source code\", \"**Clarity & Significance**\", \"Clear explanation and visualization of complex biological-to-AI translation\", \"Major practical impact: simpler system that works better in real world\", \"Opens new direction for bio-inspired AI, showing that copying nature's designs can make robot control more efficient and effective\"], \"weaknesses\": \"1. The theoretical contribution of NN structure is weak, lacking formal analysis of why this biological architecture works better than traditional ones.\\n2. The tasks (basic locomotion) are relatively simple compared to SOTA robotics challenges, making the parameter efficiency less impressive.\\n3. Unclear how well NCAP generalizes from simple to complex tasks, with limited analysis of the adaptation process.\\n4. Missing comparisons with other bio-inspired approaches like CPG and SNN.\\n5. No justification for why the chosen biological components are optimal for robot control versus other possible biological mechanisms.\\n6. 
Hand-tuned RG module could be hiding complexity that's just shifted from parameters to manual engineering.\\n7. Performance evaluation focuses on basic metrics (walking success).\\n8. No ablation studies showing which biological components are truly necessary for performance gains.\\n9. Missing analysis of computational cost and real-time performance requirements versus traditional approaches.\\n10. Limited exploration of how this approach could scale to more complex behaviors beyond basic locomotion.\", \"questions\": \"1. While the paper shows successful quadruped deployment, the novelty is questionable as the core neural architecture is heavily based on Swimmer NCAP, with limited theoretical justification for the adaptations made from simple to complex systems.\\n2. The paper lacks comparisons with other bio-inspired approaches (especially CPG-based methods which are standard in locomotion control).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presented a model for learning motion control of a quadruped robot. The model is bioinspired by incorporating a CPG and firing rate neurons. Experiments show the model can control a robot walking on flat ground well and the model has a higher parameter efficiency than typical MLP models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The model is a firing rate model modelled with differential equations, which suggests the model has internal dynamics. It could be the reason why it is more parameter-efficient than MLP on continuous control tasks. The experiments are not only conducted on a simulated robot but also on a real robot.\", \"weaknesses\": \"While this work is interesting, some of the contributions are overstated. The authors should be careful to claim the \\\"First neural circuit model for quadrupedal robot locomotion\\\", and investigate deeper into older papers from 10, 20, or even 30 years ago; there were papers that used complex CPGs to generate gaits for legged robot locomotion.\\n\\nBesides, the key contributions 2 and 3 stated on page 2 are not contributions but simply good practices when researching bioinspired models for robots. The texts from lines 124 to 132 read like contributions but need a better summary.\\n\\nThere is no comparison between the model and SOTA RL models.\\n\\nThe model is not presented very well mathematically. The corresponding section should be in the main text but not the appendix.\", \"questions\": \"1. How was the model trained? It is important but not clearly stated.\\n2. What is exactly the MLP for? There are many RL methods with MLP. The baseline is too vague.\\n3. Does the robot use any sensor except the joint sensors? Can it only walk straight? Can it change gaits?\\n4. What does the Brainstem command refer to? How does this part of the model work? Are there any reference papers for it? 
Are there any plots or examples for the command?\\n\\nThe score for the paper could go either up or down depending on the responses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
EO8xpnW7aX | SymmetricDiffusers: Learning Discrete Diffusion on Finite Symmetric Groups | [
"Yongxing Zhang",
"Donglin Yang",
"Renjie Liao"
] | The group of permutations $S_n$, also known as the finite symmetric groups, are essential in fields such as combinatorics, physics, and chemistry. However, learning a probability distribution over $S_n$ poses significant challenges due to its intractable size and discrete nature. In this paper, we introduce *SymmetricDiffusers*, a novel discrete diffusion model that simplifies the task of learning a complicated distribution over $S_n$ by decomposing it into learning simpler transitions of the reverse diffusion using deep neural networks. We identify the riffle shuffle as an effective forward transition and provide empirical guidelines for selecting the diffusion length based on the theory of random walks on finite groups. Additionally, we propose a generalized Plackett-Luce (PL) distribution for the reverse transition, which is provably more expressive than the PL distribution. We further introduce a theoretically grounded "denoising schedule" to improve sampling and learning efficiency. Extensive experiments show that our model achieves state-of-the-art or comparable performance on solving tasks including sorting 4-digit MNIST images, jigsaw puzzles, and traveling salesman problems. Our code is released at <https://github.com/DSL-Lab/SymmetricDiffusers>. | [
"Finite Symmetric Groups",
"Discrete Diffusion",
"Permutations",
"Riffle Shuffles",
"Plackett-Luce Distribution",
"Sorting",
"Jigsaw Puzzle"
] | Accept (Oral) | https://openreview.net/pdf?id=EO8xpnW7aX | https://openreview.net/forum?id=EO8xpnW7aX | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zKtR8zeEdw",
"yzKS4jjbGg",
"tu05hfA6T0",
"rG9zeoeRxe",
"oCl7twbKrw",
"o2dU0zUumX",
"kozeZdZBHf",
"h5i8Xplh6A",
"gvnKwJS6JE",
"ZID08DMm1l",
"YnhKXSiiLN",
"XG1PVjeM6N",
"TdRT6zCdKD",
"TBhumpjk8Q",
"S3dXL2N6eR",
"S11Fi2n2FG",
"RisF6hmzNl",
"QSW04DIpW5",
"PLtIuNXBH8",
"NfrJJZJg19",
"N7oxQ3jTCS",
"L36fbgA7b2",
"9EMZ9q99gZ",
"4rK748I6hr",
"0nNEDkjmSt"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1732143234774,
1732510893213,
1732143484876,
1732143169113,
1729860931558,
1734630668350,
1732142522816,
1733200834709,
1732445757977,
1732599578725,
1737523719795,
1732142578088,
1733199778403,
1732143402307,
1732142374875,
1732609944090,
1732142414618,
1732279748951,
1732686662216,
1732599640023,
1731105168011,
1730476741540,
1732666588008,
1731075007095,
1730452455340
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_hmq7"
],
[
"ICLR.cc/2025/Conference/Submission5686/Area_Chair_3khD"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_iCzn"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_iCzn"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_nrB8"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_hmq7"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_iCzn"
],
[
"ICLR.cc/2025/Conference/Submission5686/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_H6zv"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_nrB8"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_7CmY"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_7CmY"
],
[
"ICLR.cc/2025/Conference/Submission5686/Reviewer_iCzn"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer iCzn (Part 2/2)\", \"comment\": \"> **Q3:** Note that\\u00a0$S_n\\\\subseteq [n]^n$. If the input data comprises of only permutations, then the network should learn to sample from a distribution whose samples are permutations and the standard framework of diffusion models applies.\\n> \\n> \\n> The authors also mention that representing a transition matrix over $S_n$ requires $n!\\\\times n!$ sized matrix. However, the authors themselves give a succinct description/representation of the forward transition matrix in the paper. The authors should elaborate why it is not possible to use this representation algorithmically.\\n> \\n\\n**A3:** It is true that we can view each permutation in $S_n$ as a sequence in $[n]^n$. However, standard diffusion models assume token-wise conditional independence when modeling the reverse transition distributions. This assumption does not hold in learning permutations since different components of $X_{t-1}$ are **not** **independent** conditioned on $X_t$ in the reverse process, and they have to satisfy the constraint of permutations (i.e., being one of the vertices of Birkhoff Polytope). Therefore, if we have a distribution over $[n]^n$, the denoising step in standard diffusion models would lead to noisy data $X_{t-1}$ that is not an exact permutation. Furthermore, it is also computationally expensive to project it to a distribution over $S_n$. \\n\\nDiscrete diffusion methods like D3PM [4], which model categorical distributions, are also unsuitable for $S_n$. These methods require explicit matrix multiplications involving $n!\\\\times n!$ transition matrices. While D3PM uses dense transition matrices such as uniform or discretized Gaussian distributions, performing dense matrix multiplications at this scale is impractical. \\n\\nOur proposed method addresses these challenges by defining efficient, customized transition distributions through card-shuffling methods. 
This approach avoids explicit matrix multiplications by directly simulating the forward process using the efficient operations of shuffling methods. Essentially, the shuffling methods induce \\u201csparse\\u201d transitions on $S_n$, resolving the efficiency issues inherent in existing discrete diffusion models. As our framework is fundamentally different and existing frameworks are infeasible for $S_n$, our baselines focus on comparing different shuffling methods within our framework.\\n\\nThanks for pointing out the related references [2,3]. We have acknowledged their contributions in the updated paper **(lines 84\\u201386 on page 2, marked in blue)**. Work [2] extends D3PM with a continuous-time Markov chain approach but still involves costly computations tied to the transition distribution, making it challenging to apply directly to $S_n$. Work [3] models a diffusion process on sequences in $\\\\mathcal{X}^L$, changing one index of the sequence at each forward step. \\n\\nApplying [3] to $S_n$ would involve modeling sequences in $[n]^n$ using [3], with $X_0$ as the data distribution of the permutations. While Glauber dynamics in [3] avoids the conditional independence issue mentioned earlier, one caveat is that $X_t$ for $t\\\\geq 1$ would most likely lie outside $S_n$. In the reverse process, if learning is imperfect, the final sampled sequence may not be a permutation, necessitating projection onto $S_n$, which is again a non-trivial task. Currently, the code for [3] is not publicly available. We find their approach intriguing and look forward to experimenting with GGM once the code is released.\\n\\n> **Q4:** In proposition 1, should it be changed to \\\"the GPL distribution can represent a delta distributions in the limit\\\" instead of \\\"exactly\\\"?\\n> \\n\\n**A4:** You are correct since we are using $-\\\\infty$ for the logits. We have reorganized the expressiveness results in the newly uploaded version of the paper. 
The original Proposition 1 is now separated into Proposition 1 **(colored blue)** in the main paper and Lemma 4 in Appendix E. The expressiveness result for GPL is stated as Theorem 2 **(colored blue)** in the main paper and proved in Appendix E.\\n\\n## References\\n\\n[1] Generating a random permutation with random transpositions by Diaconis and Shahshahani\\n\\n[2] Simplified and Generalized Masked Diffusion for Discrete Data by Shi et al.\\n\\n[3] Glauber Generative Model: Discrete Diffusion Models via Binary Classification by Varma et al.\\n\\n[4] Austin et al. \\\"Structured denoising diffusion models in discrete state-spaces.\\\" Advances in Neural Information Processing Systems 34 (2021): 17981-17993.\"}",
"{\"title\": \"Thank you and follow-up\", \"comment\": \"Thank you for your reply and positive feedback! We appreciate your time in reviewing our paper and responses!\\n\\nYou are correct that we can use a Monte Carlo estimation of the entropy $S(q\\\\_{\\\\theta}(X\\\\_{t-1}|X\\\\_t))=-\\\\mathbb{E}\\\\_{X\\\\_{t-1}\\\\sim q\\\\_{\\\\theta}(X\\\\_{t-1}|X\\\\_t)}\\\\big[\\\\log q\\\\_{\\\\theta}(X\\\\_{t-1}|X\\\\_t)\\\\big]$, and the log likelihood $\\\\log q_{\\\\theta}(X_{t-1}|X_t)$ for PL or GPL is analytically available. Our original concern was that in Eq.(6) of [1], the term involving the entropy is $\\\\mathbb{E}\\\\_{X_{T:t}\\\\sim q\\\\_{\\\\theta}(X\\\\_{T:t})}\\\\big[S(q\\\\_{\\\\theta}(X\\\\_{t-1}|X\\\\_t))\\\\big]$, which already requires a Monte Carlo estimation when using REINFORCE to estimate the gradient. This means we have to use MC estimation twice, so there may be a very high variance.\\n\\nYou bring up a good point that [1] also requires the forward transition $q(X_t|X_{t-1})$ to be analytically available, and it is true that most shuffling methods don\\u2019t admit analytical forms of $q(X_t|X_{t-1})$. However, for riffle shuffles, $q(X_t|X_{t-1})$ is actually available, which is briefly mentioned in Appendix C.1 line 774-775 of our paper. So despite the concern with high variance, it would be an interesting experiment to try the method in [1] using riffle shuffles.\\n\\nWe now have some preliminary results on further experiments including the ImageNet jigsaw puzzle and TSP-50 experiments. For the ImageNet jigsaw puzzle experiments, due to time constraints, we ran on only the first 20 classes of the ImageNet dataset, and we currently only have results for $2\\\\times 2$ and $3\\\\times 3$. In the table below, the first two rows are the results of our method, and the last two rows are results from the Gumbel-Sinkhorn network. 
It is clear that our model outperforms the Gumbel-Sinkhorn network.\\n\\n| | | **Kendall-Tau $\\\\uparrow$** | **Accuracy (%)** | **Correct (%)** | **RMSE** $\\\\downarrow$ | **MAE** $\\\\downarrow$ |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| **SymmetricDiffusers (Ours)** | **$\\\\mathbf{2\\\\times 2}$** | **0.8627** | **83.50** | **89.88** | 0.1806 | **0.0413** |\\n| | **$\\\\mathbf{3\\\\times 3}$** | **0.7451** | **57.50** | **79.31** | **0.2245** | **0.0687** |\\n| | | | | | | |\\n| **Gumbel-Sinkhorn Network** | **$\\\\mathbf{2\\\\times 2}$** | 0.8212 | 78.36 | 86.26 | **0.1583** | 0.0687 |\\n| | **$\\\\mathbf{3\\\\times 3}$** | 0.5667 | 19.10 | 55.83 | 0.3388 | 0.1636 |\\n\\nFor the TSP-50 experiment, in the table below, Concorde and 2-Opt are OR solvers, and GCN, DIFUSCO, and Ours are learning-based models. Due to time constraints, we trained on only 1/3 of the training set (500K graphs), and the numbers for 2-Opt, GCN, and DIFUSCO are directly copied from the DIFUSCO paper [2]. The decoding heuristics used for GCN and DIFUSCO are greedy. The decoding heuristics used for our method are beam search and picking the tour with the shortest length in the final beam. Although our current results do not surpass the state-of-the-art, they are still comparable and demonstrate significant promise. As our method is not specifically tailored for TSPs, further hyperparameter tuning and architectural tweaking are required. Importantly, such modifications would not impact our core contribution, i.e., the discrete diffusion framework over finite symmetric groups. 
We plan to continue exploring this direction and will provide updated performance results on large-scale TSPs in a future version of our paper.\\n\\n| **Method** | **Concorde** | **2-Opt** | **GCN** | **DIFUSCO** | **Ours** |\\n| --- | --- | --- | --- | --- | --- |\\n| **Tour Length $\\\\downarrow$** | **5.69** | 5.86 | 5.87 | **5.70** | 5.86 |\\n| **Optimality Gap (%) $\\\\downarrow$** | **0.00** | 2.95 | 3.10 | **0.10** | 2.94 |\\n\\n### References\\n\\n[1] Sanokowski, Sebastian, Sepp Hochreiter, and Sebastian Lehner. \\\"A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization.\\\" Forty-First International Conference on Machine Learning.\\n\\n[2] Zhiqing Sun and Yiming Yang. Difusco: Graph-based diffusion solvers for combinatorial optimization, 2023.\"}",
"{\"title\": \"Response to Reviewer hmq7 (Part 2/2)\", \"comment\": \"> **Q3:** Why are experiments limited to TSP-20 instances rather than including larger TSP instances? TSP-20 is considered a very small problem size. This experiment is the most interesting to me and it would be interesting to see how this method performs on larger TSP instances. Can you provide an additional comparison on TSP-100?\\n> \\n\\n**A3:** We are running larger-scale TSP experiments like TSP-50 and TSP-100 right now, but it takes much longer to train the models compared to other experiments. Since there is only limited time for the rebuttal period, we will post a follow-up response if our experiments finish before the discussion deadline.\\n\\nWe would like to point out that the current literature on permutation learning has never done experiments on TSPs. As a general permutation learning model, our model is certainly different from models specifically designed for the TSP. The TSP is just one application, and our model has been verified to be much more effective than previous methods on other tasks like the jigsaw puzzle and sorting 4-digit MNIST numbers. Previous work, such as the baseline methods in our experiments, can only learn permutations up to a small sequence length. For example, for the sort 4-digit MNIST numbers experiment, previous methods are only effective for sequences of length up to $32$, while our method has promising results for sequence lengths up to $200$ and outperforms the baseline methods significantly under longer sequence lengths. Our method already provides a substantial improvement over previous methods in the field of permutation learning.\\n\\n> **Q4:** Can your method be combined with the framework proposed in [1] to solve TSP using diffusion models without requiring data from classical solvers?\\n> \\n\\n**A4:** Thanks for pointing out the reference! 
The framework proposed in [1] offers a compelling approach to unsupervised neural combinatorial optimization. In particular, [1] uses an energy $H$ as a signal of the quality of a solution, and uses a tractable Joint Variational Upper Bound of the commonly used reverse KL divergence to bypass the need for exact likelihood evaluations. Diffusion models can be naturally incorporated into the Joint Variational Upper Bound. Specifically, our method can certainly be combined with the framework in [1] as long as we pick a reverse transition such that the Shannon entropy $S(q_{\\\\theta}(X_{t-1}|X_t))$ in Eq.(6) of [1] is tractable. Solving the TSP without requiring data from classical solvers is definitely an exciting research direction, and combining our method and [1] could be an interesting future work. **We have discussed [1] in the newly uploaded version of our paper (line 517 on page 10, colored blue).**\\n\\n## References\\n\\n[1] Sanokowski, Sebastian, Sepp Hochreiter, and Sebastian Lehner. \\\"A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization.\\\" Forty-First International Conference on Machine Learning.\\n\\n[2] Austin et al. \\\"Structured denoising diffusion models in discrete state-spaces.\\\" Advances in Neural Information Processing Systems 34 (2021): 17981-17993.\"}",
"{\"title\": \"Response to Reviewer iCzn (Part 1/2)\", \"comment\": \"Thank you for the insightful and constructive comments. We appreciate your positive feedback and address the questions below.\\n\\n> **Q1:** Experiments are very small scale, comprising of sorting 4 digit MNIST, solving 20 node TSPs and solving jigsaw puzzles of CIFAR-10 data.\\n> \\n\\n**A1:** We acknowledge the concern about the experiment scale. However, scalability remains a highly challenging aspect in permutation learning. Previous methods in the literature are generally limited to smaller permutation lengths. For example, for the sort 4-digit MNIST numbers experiment, previous methods are only effective for sequences of length up to $32$, while our method has promising results for sequence lengths up to $200$ and beats the baseline methods significantly in these longer sequence lengths. Furthermore, none of the previous permutation learning methods have tackled TSP experiments. While our model is not specifically designed for TSP tasks, it represents a substantial improvement over existing general-purpose permutation learning methods.\\n\\nTo test our framework on more complicated and larger tasks, we are conducting additional experiments, including solving jigsaw puzzles on ImageNet and larger-scale TSP tasks (e.g., TSP-50). However, these experiments require extensive hyperparameter tuning and significantly longer training times, particularly for larger TSP instances. Due to the limited time during the rebuttal period, we will provide a follow-up response if these experiments complete before the discussion deadline.\\n\\n> **Q2:** The reverse process for random transposition is not very expressive. Suppose the reverse transposition is $(1,2)$ with probability\\u00a0$0.5$\\u00a0and\\u00a0$(2,3)$ with probability\\u00a0$0.5$. 
This simple distribution cannot be expressed using the model.\\n> \\n\\n**A2:** It is true that the inverse transposition model cannot represent the specific distribution you described. However, inverse transposition is only one of the many reverse process methods we proposed. Importantly, we demonstrate that the GPL distribution, which is the best-performer in our experiments, can approximate your distribution to any precision. The distribution you mentioned on $S_3$ is\\n\\n$$\\np\\\\\\\\left(\\\\\\\\begin{pmatrix}1&2&3\\\\\\\\\\\\\\\\2&1&3\\\\\\\\end{pmatrix}\\\\\\\\right)=p\\\\\\\\left(\\\\\\\\begin{pmatrix}1&2&3\\\\\\\\\\\\\\\\1&3&2\\\\\\\\end{pmatrix}\\\\\\\\right)=\\\\\\\\frac12.\\n$$\\n\\nConsider the following score parameters for GPL\\n\\n$$\\nS=\\\\\\\\begin{bmatrix}0&0&-\\\\\\\\infty\\\\\\\\\\\\\\\\0&-\\\\\\\\infty&-C\\\\\\\\\\\\\\\\-\\\\\\\\infty&0&0\\\\\\\\end{bmatrix},\\n$$\\n\\nwhere $C$ is some arbitrary large positive number. Then we have\\n\\n$$\\n\\\\\\\\begin{align*}\\n\\\\\\\\text{GPL}\\\\_S \\\\\\\\left( \\\\\\\\begin{pmatrix} 1&2&3 \\\\\\\\\\\\\\\\ 2&1&3 \\\\\\\\end{pmatrix} \\\\\\\\right) &= \\\\\\\\frac{\\\\\\\\exp(s_{12})}{\\\\\\\\exp(s_{11})+\\\\\\\\exp(s_{12})+\\\\\\\\exp(s_{13})}\\\\\\\\cdot\\\\\\\\frac{\\\\\\\\exp(s_{21})}{\\\\\\\\exp(s_{21})+\\\\\\\\exp(s_{23})}\\\\\\\\cdot\\\\\\\\frac{\\\\\\\\exp(s_{33})}{\\\\\\\\exp(s_{33})} \\\\\\\\\\\\\\\\\\n&=\\\\\\\\frac{1}{1+1}\\\\\\\\cdot\\\\\\\\frac{1}{1+\\\\\\\\exp(-C)}\\\\\\\\to\\\\\\\\frac12\\n\\\\\\\\end{align*}\\n$$\\n\\nas $C\\\\\\\\to\\\\\\\\infty$. 
We also have\\n\\n$$\\n\\\\\\\\begin{align*}\\n\\\\\\\\text{GPL}\\\\_S\\\\\\\\left(\\\\\\\\begin{pmatrix}1&2&3\\\\\\\\\\\\\\\\1&3&2\\\\\\\\end{pmatrix}\\\\\\\\right)&=\\\\\\\\frac{\\\\\\\\exp(s_{11})}{\\\\\\\\exp(s_{11})+\\\\\\\\exp(s_{12})+\\\\\\\\exp(s_{13})}\\\\\\\\cdot\\\\\\\\frac{\\\\\\\\exp(s_{23})}{\\\\\\\\exp(s_{22})+\\\\\\\\exp(s_{23})}\\\\\\\\cdot\\\\\\\\frac{\\\\\\\\exp(s_{32})}{\\\\\\\\exp(s_{32})} \\\\\\\\\\\\\\\\\\n&=\\\\\\\\frac{1}{1+1}\\\\\\\\cdot\\\\\\\\frac{\\\\\\\\exp(-C)}{\\\\\\\\exp(-C)}\\\\\\\\cdot 1=\\\\\\\\frac12.\\n\\\\\\\\end{align*}\\n$$\\n\\nIn fact, during the rebuttal period, we proved that the reverse process using GPL is expressive enough to represent **any** distribution over $S_n$, which is a significant result regarding the expressiveness of the reverse process. Please refer to Theorem 2 **(colored blue)** and Appendix E of the newly uploaded paper. We also provide an example illustrating the idea we used in the proof in Figure 3 and lines 895 to 904 in Appendix E.\\n\\nFinally, for the empirical performance of these different reverse methods, please refer to the ablation studies (Table 3 and 5) in the paper.\"}",
"{\"summary\": \"The authors propose a diffusion framework that learns a distribution over permutations.\\nThey propose a set of different reverse and forward diffusion processes to realize this goal.\\nThey validate their approach on various different experiments such as TSP, four-digit MNIST, CIFAR and noisy MNIST.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents an interesting approach with mostly clear writing\\n2. Several innovative reverse and forward processes are proposed\", \"weaknesses\": \"1. The claim of state-of-the-art performance on TSP-20 appears too bold given the limited scope\\n2. Several experimental details lack sufficient clarity (see Questions)\", \"questions\": \"1. How do the proposed reverse and forward processes compare to naive discrete denoising diffusion models?\\nWhat is the difference between your method and other differentiable sorting baselines in Tab. 1?\\n\\n2. What is the sequence length $n$ in the four-digit MNIST dataset?\\n\\n3. Why are experiments limited to TSP-20 instances rather than including larger TSP instances? TSP-20 is considered a very small problem size. This experiment is the most interesting to me and it would be interesting to see how this method performs on larger TSP instances. Can you provide an additional comparison on TSP-100?\\n\\n4. Can your method be combined with the framework proposed in [1] to solve TSP using diffusion models without requiring data from classical solvers?\\n\\n\\n\\n## References\\n[1] Sanokowski, Sebastian, Sepp Hochreiter, and Sebastian Lehner. \\\"A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization.\\\" Forty-First International Conference on Machine Learning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper develops techniques to learn discrete diffusion models over the group of permutations.\", \"the_reviewers_and_i_unanimously_appreciated\": [\"The novelty of the approach: Learning a diffusion model to sample on the set of permutations.\", \"The clarity of the writing.\", \"The quality of the contribution: the technical contributions are sound and relevant to the work.\", \"The only weakness of the paper is that the scalability of the method is not fully explored (in particular for large values of n).\"], \"additional_comments_on_reviewer_discussion\": \"Reviewers unanimously agreed that this paper should be accepted.\\n\\nI would not mind if this paper 'only' gets a spotlight or a poster but I am strongly recommending this paper to be accepted.\"}",
"{\"title\": \"Response to Reviewer 7CmY\", \"comment\": \"Thank you for the insightful and constructive comments. We appreciate your positive feedback and address the questions below.\\n\\n> **Q1:** Why machine learning applications require generative models over permutations in the first place, rather than just outputting a single permutation, if ultimately only a single learned permutation is required per input?\\n> \\n\\n**A1:** As in many other prediction tasks, predicting a distribution is often more useful than predicting a single output. In particular, in our context, there are several advantages.\\n\\nFirst, there are many tasks where the optimal permutation is not unique. For example, in the jigsaw puzzle experiments, there may be identical patches in an image, in which case there would be multiple permutations that can recover the image. We added noise to the MNIST dataset to disambiguate the patches purely to simplify the evaluation metric computations, and our framework is certainly able to solve jigsaw puzzles with identical patches. Another example is the TSP, where multiple permutations may exist that result in the same minimum tour length. \\n\\nSecond, many NP-hard problems exist, e.g., TSPs and graph isomorphism problems, where the optimal solution is hard to find. Having a distribution over $S_n$ is extremely useful in constructing probabilistic approximate algorithms, e.g., MCMC-based search methods.\\n\\nLast but not least, learning a distribution over $S_n$ allows learning to sample permutations, which further enables learning to generate other discrete objects. For instance, when generating expander graphs, one of the key steps in the probabilistic construction is to generate random permutations of vertices, cf., [1]. \\n\\n[1] Friedman, J., 2003, June. A proof of Alon's second eigenvalue conjecture. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing (pp. 
720-724).\\n\\n> **Q2:** For the right choice of parameters, can the reverse processes actually represent the exactly correct distributions induced by the corresponding forward diffusion process?\\n> \\n\\n**A2:** Yes! During the rebuttal period, we proved that the reverse process using GPL can represent **any** distribution over $S_n$, which is a significant result regarding the expressiveness of the reverse process. Please refer to Theorem 2 **(colored blue)** and Appendix E of the newly uploaded paper. We also provide an example illustrating the idea we used in the proof in Figure 3 and lines 895 to 904 in Appendix E.\\n\\n> **Q3:** The abstract claims to learn a distribution over $S_n$, but the concrete objects that are dealt with are ordered sets of objects (stored in an $n$ by $d$ matrix). Would it be accurate to refer to this method as *conditional* diffusion? If not, how could the architectures best be modified to output a distribution over raw permutation matrices?\\n> \\n\\n**A3:** For a set of distinct objects/components, there is a one-to-one correspondence between all ordered sequences of the components and $S_n$. If there are identical components, then we can apply arbitrary tie-breaking rules and still obtain the one-to-one correspondence. Therefore, dealing with concrete objects is equivalent to learning a distribution over $S_n$. In our experiments, we are using conditional diffusions in the sense that we are conditioning on the set (i.e. the unordered collection) of components. We do not need any modifications to output a distribution over the raw permutations because of this one-to-one correspondence.\\n\\n> **Q4:** As noted on line 154,\\u00a0$\\\\mathcal{S}$ does not change across steps \\u2014 why enforce this for diffusion models? Does this make something easier? 
Is it potentially restrictive in terms of what distributions can be represented after a given number of steps?\\n> \\n\\n**A4:** We let $\\\\mathcal{S}$ remain the same across steps to construct a time-homogeneous Markov chain. The first reason is that existing random walk theories on finite groups primarily deal with time-homogeneous Markov chains. If we allow $\\\\mathcal{S}$ to change across steps, the convergence analysis may become very difficult in general. The second reason is that having $\\\\mathcal{S}$ change across time-steps may not bring any additional benefits. For example, we could mix riffle shuffles, random transpositions, and random insertions together in the forward process. However, we know that riffle shuffles mix the fastest, so having other shuffling methods will only slow down the mixing time. Experiments also show that the riffle shuffle alone is very effective with fast mixing time.\"}",
"{\"comment\": \"Thank you very much. This addresses all my concerns. I will increase the score to 8.\"}",
"{\"comment\": \"Thank you for the response. I appreciate the proof regarding the GPL distribution and regarding the scale of the experiments. However, I have some further concerns with response A3. The authors quote that the original D3PM work assumed factored denoising and thus the denoising trajectory cannot be ensured to be a permutation.\\n\\n1. While this is true, there are many other proposals for diffusion models which do not assume factorization. For instance SEDD ([1a]) does not assume factorization of the reverse distribution and would be a valid baseline for the current work. \\n\\n2. When modeling distributions over $[n]^n$ it is not necessary that the entire trajectory is restricted to $S_n$. It is sufficient if the algorithm outputs a permutation at time $0$. \\n\\n3. I also want to point out that D3PM style models have been shown to be capable of planning (such as generating SuDoKus, Solving SAT problems), where factorization certainly does not hold and there are lots of structure and constraints in the output. I refer to [2a], [3a] and references in the paper. Even language modeling and image generation involve structure and constraints, where factorization does not hold, yet D3PM based models have been successful. \\n \\n4. Regarding Glauber Generative model [3], I see that their algorithm has been clearly described in the paper and can be implemented in a straightforward manner. \\n\\nI am not satisfied with the baselines used in this work and the absence of other discrete diffusion based approaches. \\n\\n[1a] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution\\n\\n[2a] BEYOND AUTOREGRESSION: DISCRETE DIFFUSION FOR COMPLEX REASONING AND PLANNING\\n\\n[3a] LayoutDM: Discrete Diffusion Model for Controllable Layout Generation\"}",
"{\"title\": \"Further Response Part 1/2: Experiments on ImageNet Jigsaw Puzzle and TSP-50\", \"comment\": \"Thank you for your reply. We would first like to share some preliminary results on the ImageNet jigsaw puzzle and TSP-50 experiments. For the ImageNet jigsaw puzzle experiments, due to time constraints, we ran on only the first 20 classes of the ImageNet dataset, and we currently only have results for $2\\\\times 2$ and $3\\\\times 3$. In the table below, the first two rows are the results of our method, and the last two rows are results from the Gumbel-Sinkhorn network. It is clear that our model outperforms the Gumbel-Sinkhorn network.\\n\\n| | | **Kendall-Tau $\\\\uparrow$** | **Accuracy (%)** | **Correct (%)** | **RMSE** $\\\\downarrow$ | **MAE** $\\\\downarrow$ |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| **SymmetricDiffusers (Ours)** | **$\\\\mathbf{2\\\\times 2}$** | **0.8627** | **83.50** | **89.88** | 0.1806 | **0.0413** |\\n| | **$\\\\mathbf{3\\\\times 3}$** | **0.7451** | **57.50** | **79.31** | **0.2245** | **0.0687** |\\n| | | | | | | |\\n| **Gumbel-Sinkhorn Network** | **$\\\\mathbf{2\\\\times 2}$** | 0.8212 | 78.36 | 86.26 | **0.1583** | 0.0687 |\\n| | **$\\\\mathbf{3\\\\times 3}$** | 0.5667 | 19.10 | 55.83 | 0.3388 | 0.1636 |\\n\\nFor the TSP-50 experiment, Concorde and 2-Opt are OR solvers, and GCN, DIFUSCO, and Ours are learning-based models. Due to time constraints, we trained on only 1/3 of the training set (500K graphs), and the numbers for 2-Opt, GCN, and DIFUSCO are directly copied from the DIFUSCO paper [1]. The decoding heuristics used for GCN and DIFUSCO are greedy. The decoding heuristics used for our method are beam search and picking the tour with the shortest length in the final beam. Although our current results do not surpass the state-of-the-art, they are still comparable and demonstrate significant promise. 
As our method is not specifically tailored for TSPs, further hyperparameter tuning and architectural tweaking are required. Importantly, such modifications would not impact our core contribution, i.e., the discrete diffusion framework over finite symmetric groups. We plan to continue exploring this direction and will provide updated performance results on large-scale TSPs in a future version of our paper.\\n\\n| **Method** | **Concorde** | **2-Opt** | **GCN** | **DIFUSCO** | **Ours** |\\n| --- | --- | --- | --- | --- | --- |\\n| **Tour Length $\\\\downarrow$** | **5.69** | 5.86 | 5.87 | **5.70** | 5.86 |\\n| **Optimality Gap (%) $\\\\downarrow$** | **0.00** | 2.95 | 3.10 | **0.10** | 2.94 |\\n\\n\\n### References\\n\\n[1] Zhiqing Sun and Yiming Yang. Difusco: Graph-based diffusion solvers for combinatorial optimization, 2023.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}",
"{\"title\": \"Response to Reviewer nrB8\", \"comment\": \"Thank you for the insightful and constructive comments. We appreciate your positive feedback! We have corrected the typos in the newly uploaded version of the paper.\\n\\n> **Q1:** Have the authors considered evaluating OOD performance (e.g., feeding colored, or otherwise font-shifted MNIST into a model that was trained on grayscale images)? Do they anticipate a drop in performance in that setting?\\n> \\n\\n**A1:** Evaluating the OOD performance of our model is indeed an interesting and important direction, and we would anticipate a performance drop in such settings. Feeding colored or font-shifted MNIST into a model that was trained on grayscale images would primarily test the OOD performance of the underlying vision encoder (e.g., the CNN) of SymmetricDiffuser. Testing the generalization ability with varying sequence lengths would also be interesting. For example, we could train on short-length sequences and test on long sequences. However, the focus of this paper is on proposing a novel discrete diffusion framework for symmetric groups, and enhancing OOD performance falls outside the scope of our current study. Addressing OOD performance would likely require significant modifications to the loss function or model design, making it a promising topic for future research.\"}",
"{\"title\": \"Experiments Follow-up\", \"comment\": \"Thank you for your response, and we definitely agree that intuitions should be backed up by experiments. We have tested our model against SEDD, which is currently one of the strongest discrete diffusion models. To clearly compare our method and SEDD, we set up a simple experiment for the models to learn a delta distribution over $S_n$ (i.e., a single permutation) for $n=100$ and $200$. In particular, we let the models learn the identity permutation and a fixed arbitrary permutation.\\n\\nFor SEDD, we view a permutation as a sequence of length $n$, where each number of the sequence is from $\\\\\\\\{0,1,\\\\ldots,n-1\\\\\\\\}$. While the sequence at time 0 is a permutation, the trajectory may fall outside of $S_n$ for SEDD. We use the uniform transition for the forward process following the original work. For the reverse process of SEDD, we start by sampling a random sequence of length $n$. \\n\\nFor our method, we use the riffle shuffle as the forward process and the GPL distribution as the reverse process. The entire trajectory is restricted to $S_n$. At the start of the reverse process, we sample a permutation from $S_n$ uniformly at random. \\n\\nThe SEDD model we use in our experiments has about 25M parameters, while our model only has about 2M parameters. For $n=100$, we use a batch size of 512 and 30K training steps for both methods. For $n=200$, we use a batch size of 128 and 30K training steps for both methods. 
For performance evaluation, we randomly sample 2560 sequences for SEDD and 2560 permutations for our method, and we perform their respective decoding processes.\\n\\nThe results are detailed in the following three tables with experiment setting stated in the header.\\n\\n| **The Identity Permutation, $n=100$** | **Accuracy (%)** | **Correct (%)** |\\n| --- | --- | --- |\\n| **SymmetricDiffuser (Ours)** | **100** | **100** |\\n| **SEDD** | 95.47 | 99.95 |\\n\\n| **Fixed Arbitrary Permutation,** $n=100$ | **Accuracy (%)** | **Correct (%)** |\\n| --- | --- | --- |\\n| **SymmetricDiffuser (Ours)** | **100** | **100** |\\n| **SEDD** | 93.24 | 99.93 |\\n\\n| **Fixed Arbitrary Permutation,** $n=200$ | **Accuracy (%)** | **Correct (%)** |\\n| --- | --- | --- |\\n| **SymmetricDiffuser (Ours)** | **100** | **100** |\\n| **SEDD** | 88.75 | 99.94 |\\n\\nOur method reaches 100% accuracy in all experiments. While the accuracies of SEDD are also high, there are still notable gaps between SEDD and our method, particularly as the sequence length increases. We also observe that SEDD makes mistakes exactly because some of the samples are not permutations. These experiments demonstrate that by restricting the trajectory to $S_n$ and leveraging the structures of $S_n$, our method is more effective than previous discrete diffusion models that rely on transitions in the larger sequence spaces. We plan to include these experiments in a future version of our paper. We also plan to conduct a more comprehensive analysis in the future, including tasks like learning mixture distributions.\\n\\nWe would also like to highlight that it is **nearly impossible** for other discrete diffusion models (including SEDD) to solve the tasks introduced in our paper, including the jigsaw puzzle, sorting multi-digit MNIST numbers, and the TSP. The reason is that all prior discrete diffusion models assume a **fixed** alphabet or vocabulary, and they model categorical distributions on the fixed alphabet. 
For example, in NLP tasks, the vocabulary is predefined and fixed. However, the alphabet is the set of all possible image patches for image tasks such as jigsaw puzzles and sorting MNIST numbers. It is impractical to gather the complete alphabet beforehand. We could potentially train VQVAEs to obtain quantized image embeddings. However, such an approach introduces the approximation error from the quantized alphabet. For the TSP, each node of the graph is a point in the continuous space $\\\\mathbb{R}^2$, so it is also impossible to gather the complete alphabet. In contrast, our method can be successfully applied to these tasks because we model a distribution on the fixed alphabet $S_n$, and we treat each permutation as a function that can be applied to an ordered list of objects.\\n\\nFinally, we apologize for the delay in our response, as it took us additional time to modify the code of SEDD. We hope that our response addresses your concerns thoroughly.\"}",
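The distinction drawn above — sequence-space samples that may fail to be permutations versus trajectories confined to $S_n$ — reduces to a simple membership check; a hypothetical sketch (our own helper name) of how decoded samples could be validated:

```python
def is_permutation(seq):
    """A length-n sequence over {0, ..., n-1} lies in S_n iff
    every symbol appears exactly once."""
    return sorted(seq) == list(range(len(seq)))

# A trajectory restricted to S_n passes by construction;
# a sequence-space sample can fail by repeating a symbol.
assert is_permutation([2, 0, 1])
assert not is_permutation([2, 2, 1])
```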
"{\"title\": \"Response to Reviewer hmq7 (Part 1/2)\", \"comment\": \"Thank you for the insightful and constructive comments. We appreciate your positive feedback and address the questions below.\\n\\n> **Q1:** How do the proposed reverse and forward processes compare to naive discrete denoising diffusion models? What is the difference between your method and other differentiable sorting baselines in Tab. 1?\\n> \\n\\n**A1:** Existing discrete diffusion models assume token-wise conditional independence when modeling the reverse transition distributions. This assumption does not hold in learning permutations since different components of $X_{t-1}$ are **not** **independent** conditioned on $X_t$ in the reverse process, and they have to satisfy the constraint of permutations (i.e., being one of the vertices of Birkhoff Polytope). Therefore, if we have a distribution over $[n]^n$, the denoising step in standard diffusion models would lead to noisy data $X_{t-1}$ that is not an exact permutation. Furthermore, it is also computationally expensive to project it to a distribution over $S_n$. \\n\\nDiscrete diffusion methods like D3PM [2], which model categorical distributions, are also unsuitable for $S_n$. These methods require explicit matrix multiplications involving $n!\\\\times n!$ transition matrices. While D3PM uses dense transition matrices such as uniform or discretized Gaussian distributions, performing dense matrix multiplications at this scale is impractical. \\n\\nOur proposed method addresses these challenges by defining efficient, customized transition distributions through card-shuffling methods. This approach avoids explicit matrix multiplications by directly simulating the forward process using the efficient operations of shuffling methods. Essentially, the shuffling methods induce \\u201csparse\\u201d transitions on $S_n$, resolving the efficiency issues inherent in existing discrete diffusion models. 
As our framework is fundamentally different and existing frameworks are infeasible for $S_n$, our baselines focus on comparing different shuffling methods within our framework.\\n\\nThe differentiable sorting baselines in Table 1 represent the predominant approach in the literature for learning permutations. They define differentiable approximations to sorting operations, enabling optimization over permutations. In contrast, our method uses discrete diffusion models to learn a distribution over $S_n$ via forward noising and reverse denoising processes\\u2014a fundamentally different paradigm from differentiable sorting. Additionally, our method significantly outperforms differentiable sorting methods on longer sequence lengths, marking a substantial improvement in permutation learning.\\n\\n> **Q2:** What is the sequence length\\u00a0$n$\\u00a0in the four-digit MNIST dataset?\\n> \\n\\n**A2:** The sequence length $n$ is the number of four-digit MNIST numbers that we are sorting, which is also the $n$ in $S_n$. We have clarified this on line 423 **(colored blue)** of the newly uploaded paper.\\n\\n## References\\n\\n[1] Sanokowski, Sebastian, Sepp Hochreiter, and Sebastian Lehner. \\\"A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization.\\\" Forty-First International Conference on Machine Learning.\\n\\n[2] Austin et al. \\\"Structured denoising diffusion models in discrete state-spaces.\\\" Advances in Neural Information Processing Systems 34 (2021): 17981-17993.\"}",
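To make the "sparse transitions via shuffling" point in A1 concrete, here is a minimal sketch (our own, with illustrative names) of simulating one random-transposition step directly, with no $n! \times n!$ transition matrix ever materialized:

```python
import random

def random_transposition(perm):
    """One random-transposition step: swap two independently chosen
    uniform positions (a no-op when they coincide). Simulating the
    kernel directly sidesteps any explicit n! x n! transition matrix."""
    p = list(perm)
    i = random.randrange(len(p))
    j = random.randrange(len(p))
    p[i], p[j] = p[j], p[i]
    return p
```

Each step touches only two entries of the permutation, so the induced transition kernel on $S_n$ is extremely sparse even though the state space has size $n!$.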
"{\"title\": \"Response to Reviewer H6zv (Part 1/2)\", \"comment\": \"Thank you for the insightful and constructive comments. We appreciate your positive feedback and address the questions below.\\n\\n> **Q1:** Could you provide additional comparisons between symmetric diffusers and other discrete diffusion models, such as discrete denoising diffusion probabilistic models (D3PMs), particularly regarding tasks that involve smaller permutation spaces?\\n> \\n\\n**A1:** Existing discrete diffusion models assume token-wise conditional independence when modeling the reverse transition distributions. This assumption does not hold in learning permutations since different components of $X_{t-1}$ are **not** **independent** conditioned on $X_t$ in the reverse process, and they have to satisfy the constraint of permutations (i.e., being one of the vertices of Birkhoff Polytope). Therefore, if we have a distribution over $[n]^n$, the denoising step in standard diffusion models would lead to noisy data $X_{t-1}$ that is not an exact permutation. Furthermore, it is also computationally expensive to project it to a distribution over $S_n$. \\n\\nDiscrete diffusion methods like D3PM, which model categorical distributions, are also unsuitable for $S_n$. These methods require explicit matrix multiplications involving $n!\\\\times n!$ transition matrices. While D3PM uses dense transition matrices such as uniform or discretized Gaussian distributions, performing dense matrix multiplications at this scale is impractical. \\n\\nOur proposed method addresses these challenges by defining efficient, customized transition distributions through card-shuffling methods. This approach avoids explicit matrix multiplications by directly simulating the forward process using the efficient operations of shuffling methods. Essentially, the shuffling methods induce \\u201csparse\\u201d transitions on $S_n$, resolving the efficiency issues inherent in existing discrete diffusion models. 
As our framework is fundamentally different and existing frameworks are infeasible for $S_n$, our baselines focus on comparing different shuffling methods within our framework.\\n\\n> **Q2:** How do symmetric diffusers handle scalability as $n$ increases, and what strategies do you envision for managing the factorial growth in the state space for large permutations? An empirical analysis or theoretical discussion on the scalability limits and potential optimizations (e.g., sparse transition matrices or modular architectures) would provide greater insight into the model\\u2019s practicality for large-scale tasks.\\n> \\n\\n**A2:** First, our method significantly improves the scalability of existing approaches. As evidenced by prior work, scalability remains a highly challenging problem in permutation learning. For instance, in the 4-digit MNIST sorting experiment, baseline methods are only effective for sequence lengths up to 32. In contrast, our method achieves promising results for lengths up to 200, outperforming these baselines by a large margin.\\n\\nSecond, as mentioned in A1, shuffling methods serve as an efficient \\\"sparsification\\\" of D3PM transition matrices, which is crucial to our scalability improvement. While our framework has achieved significant progress, there is still plenty of room (e.g., better parameterization of forward/reverse diffusion steps, better neural network architecture, and so on) for further scalability improvement, and it would be an exciting topic for future research. \\n\\n> **Q3:** How significant is the denoising schedule's impact on computational efficiency and model performance? Have you considered alternative denoising schedules that might further improve efficiency?\\n> \\n\\n**A3:** The denoising schedule is an important hyperparameter in our model. We have already conducted an ablation study on the denoising schedule and provided a detailed discussion in Appendix G.2 and Table 6. 
We have also provided a theoretical justification and empirical guidelines for choosing the denoising schedule in Section 3.4 of the main paper.\\n\\n> **Q4:** Could you clarify certain technical details of the forward and reverse diffusion processes, especially for readers unfamiliar with symmetric groups? \\u2026 To make these sections more accessible to readers, illustrative examples for the shuffling and reverse diffusion steps or a more detailed explanation of terms such as \\u201cstationary distribution\\u201d and \\u201cmixing time\\u201d could be added.\\n> \\n\\n**A4:** Thank you for the suggestion. We will include additional explanations and illustrative examples of these concepts in the final version of the paper to improve accessibility.\"}",
"{\"comment\": \"Thank you for your response. Indeed, OOD performance can be taken up as a separate research topic.\\n\\nAs I did not have any major concerns to begin with, I intend to maintain my current score of 8. \\n\\nCongratulations on your great work!\"}",
"{\"title\": \"Response to Reviewer H6zv (Part 2/2)\", \"comment\": \"> **Q5:** Can symmetric diffusers be adapted for other structured discrete data beyond permutations, such as specific types of graph structures or other combinatorial tasks?\\n> \\n\\n**A5:** Our model is directly applicable if a combinatorial problem with other structured discrete data can be equivalently reformulated using permutations. For instance, permutations can be used to represent solutions to TSP on graphs and exact graph matching (or graph isomorphism testing). For combinatorial problems that cannot be reformulated using permutations, permutations could still be instrumental, e.g., in generating expander graphs and various assignment problems. Given the central role of permutations in various combinatorial problems, our model has the potential to address a wide range of tasks involving structured discrete data.\\n\\n> **Q6:** Could you provide more insights or experiments to demonstrate the practical benefits of using the generalized Plackett-Luce (PL) distribution over the standard PL model?\\n> \\n\\n**A6:** We have conducted ablation studies and experiments in Appendix G.2. As seen from the results, the GPL distribution has better performance when sorting 4-digit MNIST numbers with $n=52$.\\n\\nFrom a theoretical perspective, the standard PL distribution cannot represent a delta distribution, which is the ground truth for many problems. During the rebuttal period, we further proved that the reverse process using GPL can represent **any** distribution over $S_n$, which is a significant result regarding the expressiveness of the reverse process. This result is formalized in Theorem 2 **(colored blue)** and proved in Appendix E of the newly uploaded paper. We also provide an example illustrating the idea we used in the proof in Figure 3 and lines 895 to 904 in Appendix E.\"}",
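As background for A6, a minimal sketch of sampling from the *standard* Plackett-Luce model (illustrative names; this is not the paper's GPL): items are drawn one at a time with probability proportional to their positive scores, so every ordering retains nonzero probability and an exact delta distribution is out of reach — the gap the generalized PL closes.

```python
import random

def sample_plackett_luce(scores):
    """Draw a permutation from the standard Plackett-Luce model:
    repeatedly pick the next item with probability proportional to
    its (positive) score, then remove it from the pool."""
    items = list(range(len(scores)))
    weights = list(scores)
    perm = []
    while items:
        total = sum(weights)
        r = random.random() * total
        acc = 0.0
        for k in range(len(items)):
            acc += weights[k]
            if r <= acc:
                perm.append(items.pop(k))
                weights.pop(k)
                break
    return perm
```

Since every finite score vector assigns positive mass to each of the $n!$ orderings, concentrating all mass on a single permutation would require a score to diverge — hence the need for a more expressive reverse distribution.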
"{\"title\": \"Raised Score\", \"comment\": \"I thank the authors for their detailed answers and I have raised my score to 8.\\nI am glad to hear that the authors are conducting further experiments on TSP and I am curious to see what comes out in the large-scale TSP experiments!\", \"i_have_a_follow_up_question\": \"You are writing that the framework from Sanokowski et al. 2024 can be used \\\"as long as we pick a reverse transition such that the Shannon entropy $S(q_\\\\theta(X_{t-1}|X_t))$ is tractable\\\".\\nWhat do you exactly mean by that? I see that in Sanokowski et al. 2024 $q_\\\\theta(X_{t-1}|X_t)$ is chosen so that $S(q_\\\\theta(X_{t-1}|X_t))$ can be calculated exactly. But it is not a strict requirement as you can alternatively Monte Carlo estimate the entropy with $S(q_\\\\theta(X_{t-1}|X_t)) = - E_{X_{t-1} \\\\sim q_\\\\theta(X_{t-1}|X_t)} [ \\\\log q_\\\\theta(X_{t-1}|X_t) ]$, which is possible as long as $q_\\\\theta(X_{t-1}|X_t)$ can be evaluated.\\nSo my question reduces to whether your proposed reverse diffusion transitions $q_\\\\theta(X_{t-1}|X_t)$ can be evaluated?\\n\\nFor me what you write down in L. 350 ff. \\\"Note that although we may not have the analytical form of $q(X_t|X_{t-1})$, we can draw samples from it.\\\" would rather be a show stopper of the framework from Sanokowski et al. 2024, because $q(X_t|X_{t-1})$ must be analytically available.\"}",
"{\"comment\": \"Thank you for the prompt response. I understand your claim that an algorithm which works on $S_n$ naturally can be expected to be better than off-the-shelf algorithms which can work on the product space. However, my point was that this fact has to be established with rigorous experiments -- since this is one of the fundamental motivations behind the present work.\", \"regarding_scale\": \"SEDD (and PLAID, MD4, GGM, etc.) works on language modeling tasks with a sequence length of ~1000 and a vocabulary length of ~30000 and is certainly not a product distribution (there are several places where grammar imposes strict constraints and structure). Therefore, one could imagine them performing well in planning tasks as well. My point with citing several recent works (and older ones) was that regular diffusion models have been used in planning tasks quite successfully and therefore make for a strong baseline (whether SEDD or GGM or D3PM based works).\\n\\nIncluding at least one or two of those would strengthen your argument. Looking forward to the results (if ready by then). I will discuss further with the AC and other reviewers based on this. I want to stress that I like this line of work and find it very promising.\"}",
"{\"title\": \"Further Response Part 2/2: Other Discrete Diffusion Baselines\", \"comment\": \"We then address your concerns on other discrete diffusion baselines. For all of the work you referenced [2, 3, 4, 5], whether or not they assume conditional independence or factorization, the diffusion trajectory would not be restricted to $S_n$ in our problems. You are correct that as long as the algorithm outputs a permutation at time 0, it is not necessary for the entire trajectory to be restricted to $S_n$. However, if we do not impose the constraint of keeping the trajectory in $S_n$ in the design of the algorithm, there is a high chance that the final sample of the algorithm will not be a permutation, especially for large-scale problems. In [4], although tasks such as solving Sudokus or SAT problems indeed have lots of structure and constraints in their outputs, the search spaces of these problems are not significantly large. For the Sudoku task, we manually inspected their dataset and found that the inputs have, on average, 30 initially filled cells out of the 81 cells. So the search space for Sudoku has size around $9^{50}<40!$, while our model considered search spaces with sizes up to $|S_{200}|=200!$ in the experiments. For the SAT problem, [2] considered at most 9 variables, with a search space of size $2^9=512$, which is also significantly smaller than the search spaces we considered in our paper. Intuitively, the combinatorial structure (e.g., permutations in our case) of a problem should only help us solve it. If we disregard the underlying structure in our models, it would be hard to scale up.\\n\\nFor large-scale permutation learning problems, if the final sample is not a permutation, then we need to perform a projection onto $S_n$. This is a non-trivial convex optimization task which requires iterative optimization solvers. 
This additional projection step not only slows down the sampling process, but the distribution over $S_n$ after the projection is also not guaranteed to be correct.\\n\\nNevertheless, we are glad to experiment with the baselines you proposed. Due to time constraints, we are currently focusing on adapting Score Entropy [3] to our experiment setup, and we will post a follow-up response if our experiments finish before the discussion ends.\\n\\nFinally, **we note that [2] and [4] are also concurrently under review in ICLR 2025 (submission number 9621 and 4441, respectively)**, so we kindly point out that it is unfair to request a comparison with them.\\n\\n\\n### References\\n\\n[2] Glauber Generative Model: Discrete Diffusion Models via Binary Classification by Varma et al.\\n\\n[3] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution\\n\\n[4] BEYOND AUTOREGRESSION: DISCRETE DIFFUSION FOR COMPLEX REASONING AND PLANNING\\n\\n[5] LayoutDM: Discrete Diffusion Model for Controllable Layout Generation\"}",
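The search-space sizes quoted in this response can be checked directly:

```python
import math

# Sudoku with ~50 empty cells: at most 9**50 candidate fillings,
# versus the permutation spaces considered in the paper (up to |S_200| = 200!).
sudoku_space = 9 ** 50
assert sudoku_space < math.factorial(40) < math.factorial(200)

# The SAT instances in [2]: at most 9 variables, so 2**9 = 512 assignments.
assert 2 ** 9 == 512
```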
"{\"summary\": \"The paper presents SymmetricDiffusers, a novel discrete diffusion model for learning distributions over permutations within symmetric groups. This model addresses the complexity of directly modeling the vast and discrete state space of permutations by decomposing the task into simpler, more manageable transitions.\", \"key_contributions_include\": \"1) Forward Diffusion Process Using Card Shuffling Methods: Symmetric diffusers introduce noise to permutations using classical shuffling methods (riffle shuffles, random transpositions, and random insertions). These methods facilitate a gradual transformation toward a known noise distribution, simplifying the learning process.\\n2) Generalized Plackett-Luce (PL) Distribution for Reverse Diffusion: To return the noisy state to its original distribution, the model leverages a neural network-based generalized PL distribution, enhancing expressiveness and effectively reconstructing complex dependencies within permutations.\\n3) Theoretically Grounded Denoising Schedule: An optimized denoising schedule merges reverse steps to boost sampling efficiency and learning performance, reducing computational requirements without sacrificing accuracy.\\n\\nThe model demonstrates state-of-the-art or comparable results in tasks such as sorting, jigsaw puzzle assembly, and solving traveling salesman problems, validating its effectiveness in permutation-based applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality: This paper demonstrates notable originality by advancing discrete diffusion models to tackle the problem of learning distributions over permutations in symmetric groups. 
The task of modeling permutations is inherently challenging due to the factorial growth of the state space, and SymmetricDiffusers introduces an innovative solution by utilizing card shuffling techniques (riffle shuffle, random transpositions, and random insertions) as part of a structured forward diffusion process. Combining classic combinatorial methods with modern neural-based diffusion modeling is an inspired choice well-suited to the discrete and combinatorial nature of permutations. Compared to related works on discrete diffusion\\u2014such as Discrete Denoising Diffusion Probabilistic Models (D3PMs), which focus on multinomial categories\\u2014SymmetricDiffusers addresses a unique domain with a permutation-focused framework that current D3PMs do not target.\\n\\nFurthermore, introducing a generalized Plackett-Luce (PL) distribution for the reverse process sets this work apart from other discrete diffusion models, such as score-based continuous-time discrete-state models, which operate on categorical data but lack this flexibility. The generalized PL distribution is well suited for structured dependencies in permutations, enabling greater expressiveness and more accurate learning over complex permutations.\", \"quality\": \"The paper demonstrates high methodological rigor. Each component of SymmetricDiffusers\\u2014the forward process using card shuffling, the generalized PL distribution for reverse diffusion, and the denoising schedule\\u2014is technically well developed and grounded in established theories of random walks and Markov chains on finite groups. This adds a layer of mathematical credibility to the proposed model, particularly in the careful treatment of transition probabilities and mixing times associated with the shuffling methods.\\n\\nAdditionally, the experiments are comprehensive and well-chosen to validate the model across diverse tasks, such as sorting, jigsaw puzzle completion, and traveling salesman problems (TSPs). 
This diversity of tasks effectively showcases the robustness and generalizability of SymmetricDiffusers, achieving state-of-the-art or comparable results in each case. Unlike other discrete diffusion models focused on simpler domains (e.g., categorical image data), SymmetricDiffusers\\u2019 application to more complex combinatorial problems uniquely highlights the model\\u2019s practical value and robustness.\", \"clarity\": \"The paper is generally well-organized and easy to follow, with a clear introduction to the challenges and requirements of permutation modeling. Complex ideas are supported by notations, figures, and a well-structured flow that gradually builds the reader\\u2019s understanding from the background on symmetric groups to the technical construction of SymmetricDiffusers. Visualizing the forward and reverse processes in figure form is particularly helpful, illustrating the model\\u2019s approach to diffusion over permutations.\\nHowever, compared to some other papers on discrete diffusion (e.g., D3PMs), certain sections\\u2014particularly those explaining the forward and reverse processes in detailed mathematical terms\\u2014might benefit from additional simplification for readers less familiar with random walks on finite groups. Nevertheless, the overall clarity and structure make the contributions accessible and understandable.\", \"significance\": \"SymmetricDiffusers addresses an important area in machine learning by advancing the state of permutation modeling within discrete diffusion frameworks. Distributions over permutations are critical in domains like ranking, combinatorial optimization, and sequence alignment, and current models struggle with the computational complexity posed by large discrete spaces. 
This work opens up new possibilities for generative modeling within these areas, especially with its robust performance in tasks requiring complex combinatorial reasoning (e.g., TSP).\\n\\nCompared to recent discrete diffusion works on more general categorical or sequential data, this paper contributes uniquely to discrete generative modeling by directly addressing permutation structures. This is significant because it establishes a pathway for diffusion models to effectively tackle high-complexity tasks beyond traditional applications. By providing an effective means of modeling permutations, SymmetricDiffusers could substantially impact practical applications and inspire future research in discrete-state diffusion for structured data.\\n\\nThe paper\\u2019s strengths lie in its original approach, which combines structured combinatorial methods with diffusion modeling. It also has high-quality methodological rigor, clearly presents complex concepts, and significantly contributes to modeling distributions over large permutation spaces, positioning it as a valuable advancement in discrete diffusion research.\", \"weaknesses\": \"Limited Comparative Analysis with Other Discrete Diffusion Models: While SymmetricDiffusers clearly advances permutation modeling, the paper could benefit from a more detailed comparison with other discrete diffusion models, particularly those handling high-dimensional categorical or sequential data, such as Discrete Denoising Diffusion Probabilistic Models (D3PMs). The current related work section mentions other models briefly but does not delve into how SymmetricDiffusers directly compares in terms of handling large discrete spaces. 
Including more detailed experiments or ablation studies to demonstrate SymmetricDiffusers\\u2019 advantages over such models regarding efficiency or performance on simpler permutation tasks could clarify its relative strengths and limitations.\", \"lack_of_exploration_on_scalability_to_larger_n\": \"The factorial growth of permutations means that scaling to larger $n$ can become computationally intensive. The paper currently demonstrates its model on small-to-moderate values of $n$ (e.g., tasks like 4-digit MNIST sorting). However, it does not provide clear insights into the scalability limits or potential strategies for scaling SymmetricDiffusers to larger $n$, where the computational load may become a bottleneck. Future work should consider including benchmarks on larger $n$ or discussing ways to optimize the model\\u2019s performance, such as using sparse transition matrices or adopting modular architectures.\", \"clarity_in_the_forward_and_reverse_processes\": \"While the paper is generally well-structured, some sections\\u2014particularly those detailing the forward and reverse diffusion processes\\u2014are highly technical and may be challenging for readers less familiar with symmetric groups and permutation modeling. Additional clarifications or simplified explanations could improve accessibility. Specifically, breaking down the mathematical formulations for each shuffling method and reverse diffusion process with more illustrative examples would make these sections easier to understand. Clearer definitions of key terms, such as \\u201cstationary distribution\\u201d and \\u201cmixing time\\u201d in the context of random walks, could also make the content more accessible to a broader audience.\", \"efficiency_of_the_denoising_schedule\": \"The paper introduces a theoretically grounded denoising schedule to merge reverse steps and improve efficiency, but it lacks concrete benchmarks or ablation studies to assess its impact. 
Comparing SymmetricDiffusers with and without the denoising schedule in terms of computational time and performance would provide readers with a clearer understanding of its practical benefits. Additionally, exploring alternative denoising schedules or adaptive strategies that adjust based on task complexity could further optimize the model\\u2019s performance.\", \"broader_applicability_and_practical_implications\": \"Although SymmetricDiffusers demonstrates promising results in permutation-specific tasks (sorting, TSP, jigsaw puzzles), the paper could better communicate its broader applicability and potential limitations. For example, could the model be applied to non-permutation-based tasks with discrete structures, such as certain types of graph-based tasks? A brief discussion of the boundaries of SymmetricDiffusers\\u2019 applicability and how it might adapt to related yet distinct discrete structures would clarify its versatility and limitations.\", \"suggestions_for_improvement\": \"1) Enhance comparative experiments by including Symmetric Diffusers alongside other discrete diffusion models (e.g., D3PMs) on more straightforward permutation tasks for more explicit benchmarks.\\n2) Expand on the model\\u2019s scalability for larger spaces, possibly with benchmarks on larger permutation spaces or theoretical discussions on extending to high-dimensional settings.\\n3) Provide additional clarifications and illustrative examples for complex forward and reverse diffusion processes sections.\\nInclude an ablation study on the denoising schedule to quantify its impact on performance and efficiency.\\n4) Discuss the model's broader applicability to other discrete structures beyond permutations, providing readers with insight into its potential versatility.\", \"questions\": \"Comparative Analysis with Other Discrete Diffusion Models:\", \"question\": \"Could you provide more insights or experiments to demonstrate the practical benefits of using the generalized 
Plackett-Luce (PL) distribution over the standard PL model? In which cases does the generalized PL significantly enhance performance?\", \"suggestion\": \"Examples or comparisons that highlight the generalized PL distribution\\u2019s expressiveness\\u2014especially in complex permutation tasks\\u2014would illustrate its value over standard PL models.\", \"scalability_for_larger\": \"\", \"denoising_schedule_efficiency\": \"\", \"forward_and_reverse_diffusion_process_clarifications\": \"\", \"broader_applicability_and_adaptation_to_other_discrete_structures\": \"\", \"generalized_plackett_luce_distribution_benefits\": \"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper explores a novel problem: learning probability distributions over finite symmetric groups. As an example, the problem can be thought of as learning to sort, with important differences:\\n\\n- instead of finding just the most optimal permutation, the focus is on learning a *distribution* over all possible rankings, so that rankings \\\"closer\\\" to the optimal permutation are more likely to be generated.\\n- the problem formulation is general enough to cover rankings over any finite group of elements and not just a set of $n$ integers as long as one trains the algorithm using appropriate data (for instance, pictures of numbers instead of just numbers). \\n\\nThe authors propose *SymmetricDiffusers*, a discrete diffusion model (with a transformer-based architecture in this case) trained to recover the target permutation in several steps after the original (target permutation) is converted into noise over a number of steps. This decomposition of the recovery process eases the otherwise very difficult problem of learning to directly come up with the optimal permutation out of $n!$ possible states.\\n\\nWith this context, the authors' study reveals several insights into potentially best practices with regard to the training process, *e.g.,* choice of the forward shuffling method (riffle shuffle because of its fast mixing time), and when it might be feasible to merge steps during the denoising process. The authors validate their findings via state-of-the-art results on three benchmarks: MNIST sorting, jigsaw puzzles, and the Travelling Salesman Problem.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The style of writing is extremely clear and structured.\\n2. 
The paper makes a wealth of interesting contributions: introducing a novel perspective on an important problem that is currently underexplored in the ML community; proposing a method to solve said problem; revealing insights that will help further research; and finally, validating their method on three benchmarks. The authors also share their code. The general nature of the problem means that the findings here can potentially open up a whole new set of possibilities.\", \"weaknesses\": \"I don't see obvious weaknesses. But, as the authors also acknowledge (in the conclusion and Appendix G), there is potential for improving scalability (w.r.t. $n$) and possibly extending the method to finite groups beyond $S_n$.\\n\\n**(Nit: typos)**\\n- Abstract (line 009): groups --> group?\\n- line 472: performances --> performance?\", \"questions\": \"1. Have the authors considered evaluating OOD performance (e.g., feeding colored, or otherwise font-shifted MNIST into a model that was trained on grayscale images) ? Do they anticipate a drop in performance in that setting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thanks to the authors for the response. I found it clear, and maintain my accept rating. For whatever it's worth, I'd also encourage the authors to include more of A1 in the manuscript, or at least in any future poster or presentation, as it creates a stronger narrative background for their technical contributions.\"}",
"{\"summary\": \"This paper presents a discrete diffusion model for learning a distribution over permutations ($S_n$). They present several choices of forward noising process, including transpositions, insertions, and the riffle shuffle. For the reverse noising process, they define inverse processes corresponding to each of the three proposed forward processes, as well as a generalization of the Plackett-Luce Distribution that is more expressive than the original (e.g. it can represent a delta distribution). They train their diffusion models via a variational lower bound, estimated via Monte Carlo samples since they cannot obtain an analytic form. They derive a noise schedule, based on merging adjacent diffusion steps for certain inverse distributions, and run experiments comparing different versions of their diffusion model to differentiable sorting methods and test tasks including sorting MNIST digits and solving traveling salesman instances.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This is the first work to define a diffusion model over $S_n$. Their enumeration, and mathematical treatment, of possible forward and reverse noising processes is thorough, as are the experiments and ablations. The presentation is quite clear in general. The application of a diffusion model to problems that require outputting a permutation in a differentiable manner \u2014 even if a distribution over $S_n$ is not strictly required \u2014 seems to be creative and novel. The experimental results are generally good, particularly at increased sequence lengths.\", \"weaknesses\": \"To me, the main piece missing from this paper is a motivation of why machine learning applications require generative models over permutations in the first place, rather than just outputting a single permutation, which is what the tested datasets seem to satisfy. Is this related to differentiability? 
The introduction is well-written, but would improve from being more concrete and grounded in the machine learning literature. To be specific, the claim on line 35 that \\u201cTherefore, studying probabilistic models over $S_n$ through the lens of modern machine learning is both natural and beneficial\\u201d feels a bit unjustified. Differentiable sorting is mentioned in the related works, but a discussion of why \\u201csuch methods often focus on finding the optimal permutation instead of learning a distribution over the finite symmetric group\\u201d and what the tradeoffs are would be helpful.\\n\\nThe experiments also show stronger performance for longer sequence lengths, but the quadratic scaling with longer sequence lengths remains an open direction of improvement.\", \"questions\": \"1. As discussed in the Weaknesses section, what exactly is the motivation of using diffusion model for these problems, if ultimately only a single learned permutation is required per input? This is my primary question.\\n2. For the right choice of parameters, can the reverse processes actually represent the exactly correct distributions induced by the corresponding forward diffusion process?\\n3. One minor point of confusion was that the abstract claims to learn a distribution over $S_n$, but the concrete objects that are dealt with are ordered sets of objects (stored in an $n$ by $d$ matrix). Would it be accurate to refer to this method as *conditional* diffusion? If not, how could the architectures best be modified to output a distribution over raw permutation matrices? \\n4. As noted on line 154, $\\\\mathcal{S}$ does not change across steps \\u2014 why enforce this for diffusion models? Does this make something easier? Is it potentially restrictive in terms of what distributions can be represented after a given number of steps?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper studies the important question of sampling from the finite symmetric group $S_n$ via diffusion models. This is structurally different from prior works on diffusion models, which concentrate on sampling from product spaces (i.e., sampling without replacement). The authors clearly introduce various noising Markov chains for sampling a uniform distribution over $S_n$ and show how to parametrize the forward and reverse processes. This is the main technical contribution of the paper. The training loss is then based on the variational lower bound of D3PM (Austin et al). The paper achieves strong performance in solving jigsaw puzzles of MNIST and CIFAR-10, sorting numbers from MNIST and solving TSP problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is clearly written and the Markov chains used to obtain uniformly random permutations are clearly surveyed. The construction of the parameterization of the reverse process and the loss function is nice and the algorithm outperforms existing methods in the empirical tasks.\", \"weaknesses\": \"1. Experiments are very small scale, comprising sorting 4-digit MNIST, solving 20-node TSPs and solving jigsaw puzzles of CIFAR-10 data.\\n\\n2. There is no substantive theoretical contribution other than introducing the parametrization for the reverse processes. \\n\\n3. The reverse process for random transposition is not very expressive. Suppose the reverse transposition is $(1,2)$ with probability $0.5$ and $(2,3)$ with probability $0.5$. This simple distribution cannot be expressed using the model. \\n\\n4. Note that $S_n \\subseteq [n]^n$. If the input data comprises only permutations, then the network should learn to sample from a distribution whose samples are permutations and the standard framework of diffusion models applies. This simple baseline has not been considered. 
\n\n[Q related to 4] The authors also mention that representing a transition matrix over $S_n$ requires an $n!\\times n!$ sized matrix. However, the authors themselves give a succinct description/representation of the forward transition matrix in the paper. The authors should elaborate on why it is not possible to use this representation algorithmically.\n\n**Minor:**\nIn proposition 1, should it be changed to \\\"the GPL distribution can represent a delta distribution in the limit\\\" instead of \\\"exactly\\\"?\n\n*additional references:*\n[1] Generating a random permutation with random transpositions by Diaconis and Shahshahani\n[2] Simplified and Generalized Masked Diffusion for Discrete Data by Shi et al.\n[3] Glauber Generative Model: Discrete Diffusion Models via Binary Classification by Varma et al.\", \"questions\": \"Address the points raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
EO2hZTtK3M | CM^2: Cross-Modal Contextual Modeling for Audio-Visual Speech Enhancement | [
"Feixiang Wang",
"Shuang Yang",
"Shiguang Shan",
"Xilin Chen"
] | Audio-Visual Speech Enhancement (AVSE) aims to improve speech quality in noisy environments by utilizing synchronized audio and visual cues.
In real-world scenarios, noise is often non-stationary, interfering with speech signals at varying intensities over time.
Despite these fluctuations, humans can discern and understand masked spoken words as if they were clear.
This capability stems from the auditory system's ability to perceptually reconstruct interrupted speech using visual cues and semantic context in noisy environments, a process known as phonemic restoration.
Inspired by this phenomenon, we propose Cross-Modal Contextual Modeling (CM$^2$), integrating contextual information across different modalities and levels to enhance speech quality.
Specifically, we target two types of contextual information: semantic-level context and signal-level context.
Semantic-level context enables the model to infer missing or corrupted content by leveraging semantic consistency across segments.
Signal-level context further explores coherence within the signals developed from the semantic consistency.
Additionally, we particularly highlight the role of visual appearance in modeling the frequency-domain characteristics of speech, aiming to further refine and enrich the expression of these contexts.
Guided by this understanding, we introduce a Semantic Context Module (SeCM) at the very beginning of our framework to capture the initial semantic contextual information from both audio and visual modalities.
Next, we propose a Signal Context Module (SiCM) to obtain signal-level contextual information from both the raw noisy audio signal and the previously acquired audio-visual semantic-level context.
Building on this rich contextual information, we finally introduce a Cross-Context Fusion Module (CCFM) to facilitate fine-grained context fusion across different modalities and types of contexts for the subsequent speech enhancement process.
Comprehensive evaluations across various datasets demonstrate that our method significantly outperforms current state-of-the-art approaches, particularly in low signal-to-noise ratio (SNR) environments. | [
"Speech Enhancement",
"Audio-Visual",
"Contextual Modeling"
] | Reject | https://openreview.net/pdf?id=EO2hZTtK3M | https://openreview.net/forum?id=EO2hZTtK3M | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rPhNwaiYPm",
"q94okFsi3y",
"gNla3AvCYX",
"egr8XDsdTq",
"WmhzEqEGeg",
"OvuM7ERMo3",
"NhlMHKdebh",
"L9B0doi5aP",
"G2H7FojbDw",
"FbEVS7qnQS",
"CZyzfG3stv",
"4W7phOD2Fl"
],
"note_type": [
"decision",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1737523618150,
1734619066697,
1731360069404,
1731905735273,
1733112558680,
1730554274633,
1732559249962,
1730441096061,
1731905786481,
1731905575973,
1733112515284,
1730559188545
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4079/Area_Chair_n7sr"
],
[
"ICLR.cc/2025/Conference/Submission4079/Reviewer_NSCs"
],
[
"ICLR.cc/2025/Conference/Submission4079/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4079/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4079/Reviewer_s61k"
],
[
"ICLR.cc/2025/Conference/Submission4079/Reviewer_6pDV"
],
[
"ICLR.cc/2025/Conference/Submission4079/Reviewer_fg1Q"
],
[
"ICLR.cc/2025/Conference/Submission4079/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4079/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4079/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4079/Reviewer_6pDV"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"metareview\": \"This paper presents Cross-Modal Contextual Modeling (CM^2) to improve Audio-Visual Speech Enhancement (AVSE). The idea is to use visual information to de-noise or otherwise improve the quality of audible speech. CM^2 integrates two types of contextual information\u2014semantic and signal context. The semantic context helps the model infer missing or corrupted speech by maintaining consistency across segments, while the signal context leverages coherence within signal frames. The approach exploits the correlation between visual features, such as the speaker's facial cues, and audio frequency characteristics, to aid the enhancement process.\", \"the_model_consists_of_three_main_components\": \"a Semantic Context Module (SeCM) for initial contextual extraction, a Signal Context Module (SiCM) for signal-level context from noisy inputs, and a Cross-Context Fusion Module (CCFM) to combine these contexts. This architecture allows for detailed context fusion across different modalities, to improve speech clarity. Experimental results show that CM^2 outperforms other state-of-the-art models such as RTFS-Net, demonstrating substantial gains in metrics related to speech quality and intelligibility (SDR, PESQ, and STOI).\\n\\nStrengths\\n- the paper discusses a relevant problem and the addition of AV-HuBERT as a feature extractor is new\\n- the paper is generally well written and easy to understand\\n\\nWeaknesses\\n- Authors do not provide human validations of the improved intelligibility and speech quality (only automatic metrics)\\n- Stronger & more recent baselines such as RTFS-Net are missing (from the paper). \\n\\nAddressing the two weaknesses is a necessary step before acceptance for publication can be recommended.\", \"additional_comments_on_reviewer_discussion\": \"One reviewer initially considered similarities between the present paper and other published work a potential ethics violation, since some references were not clearly marked. 
This issue could be resolved as a misunderstanding, and further discussion focused on actual similarities and their impact. The authors, however, did not actually include comparisons to more recent work in their manuscript.\"}",
"{\"summary\": \"The paper proposes a deep learning model called Cross-Modal Contextual Modeling (CM^2), which utilizes both audio and visual cues to enhance speech quality in noisy environments. CM^2 combines two types of contextual information: semantic context and signal context. Experimental results show superior model performance over existing works.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation is sound and clear.\\nThe paper reports SOTA performance on all metrics including SDR, PESQ, and STOI.\", \"weaknesses\": \"The idea of extracting and integrating semantic context, signal context, and visual frequency (as shown in figure 1) is quite interesting, but the connections between these components and the proposed model modules/architecture are not so strong/clear. Semantic context is extracted from a pretrained model (visual or audio-visual model, e.g. AVHuBERT), while signal context is simply like a fusion module of noisy audio input and the semantic features.\\nThe proposed model architecture consists of many components including SeCM, CCFM, SiCM, and a pretrained model like AVHuBERT; it was trained with a loss function combining magnitude spectrogram loss, complex spectrogram loss, and adversarial loss. Thus, it is hard to highlight the importance of each component.\\nExperimental results mainly focus on model performance on speech enhancement with metrics of SDR, PESQ, and STOI. However, analysis of the model resource consumption, including memory and computational cost (at training and inference), could be beneficial.\", \"questions\": \"About evaluation, how do you make comparisons with previous works under different signal-to-noise ratio conditions, as I don't see this type of evaluation in most of the previous works?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"a) Firstly, while both RTFS-Net and our method utilize pre-trained models to extract visual features, the ways we use them, the methods by which we obtain these features, and our objectives all vary substantially. RTFS-Net hopes to extract target speaker information from visual lip movements with a pre-trained lipreading network. Conversely, our approach integrates the multi-modal semantic information from both noisy audio and visual inputs by the self-supervised audio-visual speech model (AV-HuBERT), which is used to guide and enhance the subsequent construction of signal-level context. Utilizing AV-HuBERT or similar models to encode visual information is common in many AVSE/AVSS applications. For instance, AV-Gen in 2023 (https://arxiv.org/abs/2306.01432) employs AV-HuBERT to derive visual embeddings, where the visual features of each layer in AV-HuBERT are weighted and summed to inform an audio-based score model for AVSE; SSL-AVSE in 2023 (https://arxiv.org/abs/2307.07748) leverages AV-HuBERT as a front-end encoder for audio-visual feature encoding, where the audio-visual features in each layer are weighted and summed to feed a decoder for AVSE. These implementations are distinctly different from ours in both purpose and the subsequent manner of using the obtained features.\\n\\n b) Secondly, as detailed in Section 3.2, our audio encoder consists of a validated structure comprising four convolutional blocks. This configuration was determined through our experiments and is independent of the audio encoder used in RTFS-Net.\\n\\n c) Thirdly, concerning the modal fusion module, we highlight three key differences between our CM$^2$ module and RTFS-Net:\\n\\n i) Overall, our CCFM is not a typical modal fusion module. CCFM is a context fusion module, as detailed in Sections 1 and 3.4, differing significantly from RTFS-Net\u2019s CAF Block in both target and specific design. CCFM focuses on using semantic context to guide signal context modeling. 
Initially, it upsamples the multi-modal semantic context, and then combines it with the signal context at the channel level. This is followed by a fine-grained fusion process, implemented in an attention-like manner, to enhance integration. This approach is totally different from RTFS-Net\u2019s method of Gated Fusion for audio-visual features, as evidenced by the distinct methodologies depicted in our Figure 3 compared to RTFS-Net's Figure 2.\\n\\n ii) Moreover, we emphasize the impact of the speaker\u2019s visual appearance on the audio frequency domain. As mentioned in the previous point 2, most TF-domain AVSS / AVSE methods, including RTFS-Net, focus only on fusing audio-visual features with the same dimension. These methods typically upscale the visual features in the temporal domain and then repeat them across the frequency dimension, with the target of just obtaining the same dimension as audio features in the TF-domain. In contrast, we intentionally introduce learnable upsampling of the audio-visual semantic context features in both time and frequency dimensions. Then, in addition to the fusion module on the time dimension, a frequency dimension fusion module is applied for deeper integration of the two types of context. Table 6 demonstrates the effectiveness of this design.\\n \\n iii) The channel operations in our CCFM's Channel Swapping Block (CSBlock) differ significantly from those in RTFS-Net. RTFS-Net's operations are concentrated on the real and imaginary components of the audio spectrum, assigning the first half channel dimensions of audio features to the real part and the second half to the imaginary, then employing a complex multiplication-like method for spectral output. However, as detailed in Section 3.4, our CSBlock operates under our premise that different channel information highlights different aspects, and redundant information in a channel of one modality might be complementary for the other modality. 
Hence, we swap half the channels between the two modalities to facilitate preliminary fusion. This method is distinct from RTFS-Net.\\n\\n d) In discussing Time-Frequency (TF) domain modeling, it is important to note that modeling time and frequency separately is a standard practice in SE and SS tasks, as seen in TF-Gridnet (2022-9-8 released, ICASSP 2023) and RTFS-Net (2023-9-29 released, ICLR 2024), and earlier in CMGAN (2022-3-28 released, Interspeech 2022, which is already cited in our paper) (https://arxiv.org/abs/2203.15149). Moreover, the TF domain modeling itself is not among our claimed contributions. Instead, we emphasize the significant role of the visual modality in restoring audio frequency domain characteristics, as detailed previously in point 2.\\n\\n e) Our decoders are implemented based on CMGAN and are totally unrelated to RTFS-Net or TF-Gridnet.\\n\\n f) We implemented a discriminator inspired by CMGAN and Metric GAN, transforming the non-differentiable metric PESQ into a training target. Neither RTFS-Net nor TF-Gridnet employs a similar approach.\", \"title\": \"Response to Reviewer 6pDV for Research Integrity Issues\uff082/3\uff09\"}",
"{\"title\": \"Response to Reviewer 6pDV (2/2)\", \"comment\": \"3. **Regarding the citations of prior works.**\\n\\nFirstly, we must clarify that we are more than willing to include relevant references in the subsequent revision of our paper. However, it is unreasonable and unfair to escalate this to ethical issues simply because we did not cite the mentioned Speech Separation studies. All the mentioned papers in your previous comment belong to the field of Speech Separation (SS), whereas our paper focused on Speech Enhancement (SE). These two fields address fundamentally different problems. Generally, papers on SE do not frequently cite SS literature, and vice versa. For instance, in our literature review, RTFS-Net from the speech separation field cited 13 SS papers but only 2 SE papers. Similarly, CMGAN from the speech enhancement field cited 6 SS papers and 20 SE papers. Our own work, CM\u00b2, cited 7 SS papers and 32 SE papers. Suggesting that our failure to cite speech separation papers indicates an ethics issue is unjustifiable.\\n\\nRegarding the works mentioned by other reviewers, the main reasons for citing them are twofold: to enhance our work by adding additional performance comparisons and to enrich it by including more references. The reasons are not that our ideas are similar to or replicate existing studies. Specifically, the two works mentioned by Reviewer s61k, namely LA-VocE and GCRN, were not previously cited because they represent a different methodology and are not closely related to our method. Nevertheless, we are happy to adopt the reviewer's suggestion to include a performance comparison with these two works. Regarding the comments from Reviewer fg1Q about Dual-path Mamba, SPMamba, TF-Gridnet, and RTFS-Net, we did not previously cite these as they primarily focus on speech separation. However, we are more than willing to reference them to enrich our paper. 
As for the AVDPRNN (https://www.isca-archive.org/avsec_2024/gogate24_avsec.pdf) and A-V Demucs (https://www.isca-archive.org/avsec_2024/tiwari24_avsec.pdf) mentioned by Reviewer fg1Q, these papers were published just before the ICLR 2024 submission deadline (September 1, 2024). It is generally understandable that such recently published works were not cited. We are grateful for all the reviewers' suggestions regarding additional references. These will certainly help us improve our work. As for the time-frequency alternating modeling methods mentioned by Reviewer fg1Q, we have already cited CMGAN, which is an earlier work that adopted this approach in the field of Speech Enhancement.\\n\\nWe hope the above explanations address your concerns. We once again kindly request that you revise the hurtful wording in the previous review comments. Looking forward to your response.\"}",
"{\"summary\": \"This paper introduces a novel framework, called Cross-Modal Contextual Modeling ($CM^2$), for developing audio-visual speech enhancement technology. The $CM^2$ uses two types of contextual information: semantic and signal contexts. Semantic-level context enables the model to infer missing or corrupted content, leveraging semantic consistency across segments, whereas signal-level context further explores coherence within the signals developed from the semantic consistency. The $CM^2$ also incorporates visual information into the frequency domain and verifies that visual information plays a critical role in enhancing noisy speech.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written and easy to understand.\", \"The authors clearly pointed out that most existing approaches focus solely on fusion in the temporal domain and overlook the potential correlations between the frequency dimensions of the visual and audio modalities.\", \"The authors conducted comprehensive experiments on ablation studies showing that the proposed $CM^2$ outperforms the previous AVSE methods.\"], \"weaknesses\": [\"The authors underscore the significance of visual information in recovering audio frequency domain information with the ablation study in Table 6. However, my concern is that the performance improvements are mainly due to the pre-trained visual encoder like AV-HuBERT, not the proposed CCFM (also in Table 10). I think it is better to provide more analysis on how the proposed CCFM actually boosts speech enhancement performance without the large pre-trained visual encoder. I also suggest the authors provide visualizations of the intermediate features produced by CCFM.\", \"While the task is audio-visual speech enhancement, the authors have not provided a demo video showing how clear and intelligible the output speech samples are. 
The desirable demo could be side-by-side comparisons with baseline methods. Furthermore, there is no human subjective MOS performance verifying that the enhanced output samples are actually better than those from the previous literature. I would suggest gathering a certain number of participants to validate the enhanced speech samples by evaluating naturalness, intelligibility, etc.\", \"The authors only showed three different metrics while other previous papers have more. I encourage the authors to provide more quantitative performance metrics to verify the proposed $CM^2$, such as MCD (Mel Cepstral Distortion), which measures the difference between the spectral features of the synthesized speech and the target speech, and ViSQOL (Virtual Speech Quality Objective Listener). I think MCD is important because this paper specifically underlines the importance of the frequency-domain characteristics of speech with visual appearance.\"], \"questions\": [\"### Additional Comments\", \"There are missing references like [1,2] that are well-known in speech enhancement tasks. Also, I would recommend the authors compare the proposed architecture with LA-VocE [1] since it's one of the recent state-of-the-art AVSE papers.\", \"Is there a reason that the authors use ISTFT rather than a vocoder when converting the mel-spectrogram into the actual audio waveform?\", \"Besides the quantitative comparison, I am curious about the comparisons of inference times and numbers of parameters of the proposed model and other methods. Are those comparable?\", \"line 269: $CM_2$ -> $CM^2$?\", \"Please increase the line space between lines 425 - 426 for better readability.\", \"[1] Mira, Rodrigo, et al. \\\"LA-VocE: Low-SNR audio-visual speech enhancement using neural vocoders.\\\" ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.\", \"[2] Tan, Ke, and DeLiang Wang. 
\\\"Learning complex spectral mapping with gated convolutional recurrent networks for monaural speech enhancement.\\\" IEEE/ACM Transactions on Audio, Speech, and Language Processing 28 (2019): 380-390.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"The points raised are valid, but do not address the problem.\\n\\nTo be clear, I think this work differs significantly from previous works and contributions. The introduced techniques are interesting, and I believe they will be helpful to future researchers. However, many of the techniques used have been introduced previously and are uncited in the manuscript.\\n\\nIn particular, I take issue with the claim of ownership of the method of switching dimensions introduced in DPRNN. Specifically, in line 292, the authors state, \\\"we introduce a Channel Swapping Block (CSBlock).\\\" However, this approach is not the original work of $CM^2$, as it can be observed in the DPRNN paper (https://arxiv.org/abs/1910.06379) and source code at lines 312 and 319 (https://github.com/asteroid-team/asteroid/blob/154c52f6e15de9e42213b9997aa1a0ad8b0d453b/asteroid/masknn/recurrent.py#L319). This method has been employed consistently over the past five years, which was my primary reason for raising the ethics flag. \\n\\nThe subsequent channel-swapping operation is also reminiscent of equations 23 and 24 of RTFS-Net, and the overall pipeline of $CM^2$ aligns closely with other works such as RTFS-Net, TF-GridNet, CTCNet and others. Another reviewer has also noted the failure to cite other significant papers in the field, which led me to mistakenly attribute the work in this paper as original contributions introduced by $CM^2$, instead of by the original authors. This omission is concerning to me, as I believe this style of writing could easily mislead others in a similar way. \\n\\nSources should be properly cited and acknowledged. Despite this paper\\u2019s similarities to other work, with proper citing and referencing it would not be cause for concern.\"}",
"{\"summary\": \"The paper presents a novel approach called Cross-Modal Contextual Modeling (CM2) for Audio-Visual Speech Enhancement (AVSE). Inspired by the human auditory system's phonemic restoration, CM2 integrates semantic and signal-level contexts across audio and visual modalities to improve speech quality in noisy environments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces a new approach to audiovisual speech enhancement (AVSE) by integrating semantic and signal context, inspired by phoneme recovery.\\n\\n2. The semantic context module (SeCM), signal context module (SiCM), and cross-context fusion module (CCFM) are clearly explained in the paper.\\n\\n3. The authors conduct detailed ablation experiments on the proposed modules, effectively illustrating the framework and its components.\", \"weaknesses\": \"1. The authors propose a SeCM block that integrates visual and auditory semantic content, but the method does not explain how E is obtained from V, PV, and PAV. The authors should provide a detailed explanation of this process. Additionally, using time-domain information as input increases the complexity of this part of the model.\\n\\n2. References to BiMamba should include [1,2] because these methods are the first to use bidirectional Mamba in the speech domain, which was not present in the original Mamba paper.\\n\\n3. The time-frequency alternating module is very common in the speech separation field, and the authors should reference related work. For example, TF-GridNet [3] and RTFSNet [4] use similar time-frequency alternating modules, which are very effective in multimodal speech enhancement.\\n\\n4. The multimodal speech enhancement methods compared are quite outdated. The authors should compare the latest methods (RTFSNet [4], [5], [6], etc.), as many new methods were proposed in 2024. 
Moreover, using numerous pre-trained models in the SeCM block results in a very complex model, which might not be optimal compared to current methods, as increased parameters can enhance model generalization. The authors should calculate the parameter count and computational load (MACs) of different models to more comprehensively demonstrate model performance.\\n\\n5. In Equation 8, the authors did not describe the meaning of the F function, which should be explained there.\\n\\n6. In line 278, the speech feature P should not be F and T; please correct this.\\n\\n7. Lines 424 and 425 have insufficient spacing and should be adjusted.\\n\\n[1] Jiang X, Han C, Mesgarani N. Dual-path mamba: Short and long-term bidirectional selective structured state space models for speech separation[J]. arXiv preprint arXiv:2403.18257, 2024.\\n\\n[2] Li K, Chen G. Spmamba: State-space model is all you need in speech separation[J]. arXiv preprint arXiv:2404.02063, 2024.\\n\\n[3] Wang Z Q, Cornell S, Choi S, et al. TF-GridNet: Integrating full-and sub-band modeling for speech separation[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023.\\n\\n[4] Pegg S, Li K, Hu X. RTFS-Net: Recurrent time-frequency modelling for efficient audio-visual speech separation[J]. arXiv preprint arXiv:2309.17189, 2023.\\n\\n[5] Tiwari U, Gogate M, Dashtipour K, et al. Real-Time Audio Visual Speech Enhancement: Integrating Visual Cues for Improved Performance[C]//Proc. AVSEC 2024. 2024: 38-42.\\n\\n[6] Gogate M, Dashtipour K, Hussain A. A Lightweight Real-time Audio-Visual Speech Enhancement Framework[C]//Proc. AVSEC 2024. 2024: 19-23.\", \"questions\": \"Please refer to above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"For the second question, you mentioned that the operation in the CSBlock was first introduced in DPRNN (2019) and subsequently used in DPTNet (2020), TF-Gridnet (2022), and RTFS-Net (2023). We have conducted extensive research and identified a significant misunderstanding on the part of the reviewer. As detailed in Section 3.4 and previously clarified, our CSBlock is specifically responsible for the preliminary fusion of two different levels of the context. This contrasts sharply with the single-modality models DPRNN, DPTNet, and TF-Gridnet mentioned by the reviewer. This fusion is based on our channel redundancy premise among different modalities as explained previously, and we did not identify any similar ideas or operations in these methods. The only possible similarity to RTFS-Net seems to be the point that both methods involve operations on channel dimensions; however, as we have already clarified above, their approach to channel operations is different from ours.\\nRegarding the misunderstanding, we suspect that the reviewer may have confused our channel swapping operation with the segmentation operation introduced in DPRNN. This segmentation technique divides long input sequences into shorter chunks, which are then concatenated to form a 3-D tensor. Subsequently, two separate RNNs process these sequences along two dimensions of the tensor, a method depicted in DPRNN Figure 1. Similar approaches have been adopted in the subsequently mentioned works like DPTNet, TF-Gridnet, and RTFS-Net. However, our CM$^2$ module does not utilize this segmentation method.\\n\\nFinally, we appreciate your valuable feedback on our work. We hope the explanations provided could resolve any misunderstandings. Should you have any further inquiries or require additional clarification, please do not hesitate to contact us. We are fully willing to address any further concerns you might have and look forward to your response. 
Additionally, we kindly request that you revise the wording of the original review to remove some hurtful terms such as \\\"plagiarism\\\".\", \"title\": \"Response to Reviewer 6pDV for Research Integrity Issues\\uff083/3\\uff09\"}",
"{\"title\": \"Response to Reviewer 6pDV for Research Integrity Issues\\uff081/3\\uff09\", \"comment\": \"Dear Reviewer 6pDV,\\n\\nThank you for your recognition, suggestions, and evaluation of our work. Regarding the research integrity issues you raised, we must clarify that the contributions we claimed are entirely different from those in the mentioned works like RTFS-Net. \\n\\nFirstly, you mentioned that our Figure 2 looks similar to RTFS-Net and there is an amount of overlap between the contributions of our paper and the contributions in RTFS-Net and TF-Gridnet. We believe this is a misunderstanding. In AVSE / AVSS, the pipeline of \\u201caudio / visual encoders \\u2192 audio-visual fusion module \\u2192 (Time-Frequency) enhancer/separator \\u2192 decoder\\u201d is quite common, as seen in OVA in 2020 (https://ieeexplore.ieee.org/document/9053033), Muse in 2021 (https://arxiv.org/abs/2010.07775), and VisualVoice in 2021 (https://arxiv.org/abs/2101.03149). This commonality in the overall pipeline frameworks may cause superficial similarities between different works, especially when using relatively generalized diagrams for overview representation. However, this process itself is common, and also not a contribution we claimed. Considering the generic nature of this pipeline, it is improper to deem our work as potentially plagiaristic, which is such a hurtful term. Moreover, the issues addressed by RTFS-Net and TF-Gridnet are entirely different from our paper (speech separation aims to separate different speakers, while speech enhancement aims to enhance target speech from noisy input). Besides this point: \\n1. The first contribution we claimed is that we propose a new perspective to solve AVSE. This perspective draws inspiration from cognitive studies on human auditory processing in noisy environments. 
Drawing inspiration from how humans use semantic insights to compensate for noise-affected segments, we align this approach with the objective of AVSE\\u2014to enhance the output speech signals. Based on this general idea, we first extract semantic-level information, which then guides the subsequent modeling of speech at the signal level so as to lead to an enhanced, high-quality speech signal at the end. This general idea has not been previously proposed. RTFS-Net's main contribution is a lightweight model that performs AVSS in the time-frequency domain with minimal resource consumption, achieving impressive performance. After carefully reviewing RTFS-Net, we acknowledge it as an important and influential work in the field of speech separation. We are also pleased to include RTFS-Net and other significant speech separation studies mentioned in your comment. However, we must clarify that our contributions are entirely distinct from these works.\\n2. Our second contribution specifically emphasizes the importance of visual information for modeling the frequency characteristics of audio information\\u2014an aspect previously neglected in works like RTFS-Net and other AVSE/AVSS approaches. Earlier time-frequency domain AVSE/AVSS methods, including RTFS-Net, focused solely on aligning audio-visual features in the time dimension. They typically repeated temporal visual features along the audio frequency dimension merely to make the visual feature\\u2019s dimension equal to the audio feature\\u2019s. In contrast, we noticed the role of visual information for recovering audio signals and intentionally introduced simple learnable upsampling of the audio-visual semantic context along the frequency dimension. This simple learnable manner aims to maximize the influence of visual data on every frequency bin of the audio spectrum.\\n3. 
Finally, before we discuss the specific module comparisons, it is important to note that our development process was guided by a new perspective and particular objectives. Each module in our system is introduced with unique objectives, distinct from RTFS-Net or any other systems.\"}",
"{\"title\": \"Response to Reviewer 6pDV (1/2)\", \"comment\": \"Dear Reviewer 6pDV,\\n\\n1. **Regarding your statement that \\u2018Specifically, in line 292, the authors state, \\\"we introduce a Channel Swapping Block (CSBlock).\\\"\\u2019**\\n\\nWould you mind kindly quoting the full sentence, including the preceding and following parts in our paper? The complete sentence in this part reads: \\n > 'Given the distinct emphasis of channel information within the two modalities, channels that are redundant in one modality can provide complementary benefits to the other. Based on this idea, we introduce a Channel Swapping Block (CSBlock) to preliminarily merge information across the modalities at the channel level for later fine-grained fusion on time or frequency dimensions.'\\n \\nThe sentence before states the rationale for introducing the CSBlock, emphasizing our assumption about how the two modalities interact in the channel dimension. The second half of the sentence explains the purpose behind our introduction of the block. The core operation of DPRNN is to solve the ineffectiveness of conventional RNNs in tackling long sequences, by splitting long sequences into smaller chunks and then interleaving two RNNs, an intra-chunk RNN and an inter-chunk RNN, for local and global modeling respectively. As we have emphasized in both the original manuscript and our previous responses, both the motivation and design of our CSBlock are completely different from the operation in DPRNN. Neither previous AVSE nor AVSS methods employ the same approach.\\n\\n**Code-level:** In the view of specific implementation, we have reviewed the DPRNN code according to the provided link and provide a line-wise comparison between the relevant DPRNN code and our implementation (Formulas 6 and 7). We truly cannot see any relationship between the DPRNN operation and our CSBlock. 
\\n\\n **DPRNN Source Code:**\\n```py\\n309 B, N, K, L = x.size()\\n310 output = x # for skip connection\\n311 # Intra-chunk processing\\n312 x = x.transpose(1, -1).reshape(B * L, K, N)\\n313 x = self.intra_RNN(x)\\n314 x = self.intra_linear(x)\\n315 x = x.reshape(B, L, K, N).transpose(1, -1)\\n316 x = self.intra_norm(x)\\n317 output = output + x\\n318 # Inter-chunk processing\\n319 x = output.transpose(1, 2).transpose(2, -1).reshape(B * K, L, N)\\n320 x = self.inter_RNN(x)\\n321 x = self.inter_linear(x)\\n322 x = x.reshape(B, K, L, N).transpose(1, -1).transpose(2, -1).contiguous()\\n323 x = self.inter_norm(x)\\n```\\n**Our Code:**\\n```python\\n161 audio_first_half = audio[:,:C//2,...]\\n162 video_first_half = video[:,:C//2,...]\\n163 audio_swap = torch.cat([audio_first_half, video[:,C//2:,...]], dim=1)\\n164 video_swap = torch.cat([video_first_half, audio[:,C//2:,...]], dim=1)\\n```\\n\\n **Equation:** The formulas (23) and (24) from RTFS-Net split the audio feature channels into real and imaginary parts, and then calculate the target speech using a form of complex multiplication. While both the RTFS-Net method and ours involve splitting the feature channels into two parts, the key point lies in how these parts are used and for what purpose. Our goal is to exchange the different channel information between the two modalities based on the redundancy and complementarity assumption described above, which is fundamentally different in both usage and purpose compared to RTFS-Net. It is not reasonable to assume that they are similar just because both involve splitting the data into halves.\\n\\n2. 
**Regarding the overall pipeline of CM$^2$:**\", \"we_have_already_partially_addressed_this_issue_in_our_previous_responses\": \"> 'In AVSE / AVSS, the pipeline of \\u201caudio / visual encoders \\u2192 audio-visual fusion module \\u2192 (Time-Frequency) enhancer/separator \\u2192 decoder\\u201d is quite common, as seen in OVA in 2020 (https://ieeexplore.ieee.org/document/9053033), Muse in 2021 (https://arxiv.org/abs/2010.07775), and VisualVoice in 2021 (https://arxiv.org/abs/2101.03149). This commonality in the overall pipeline frameworks may cause superficial similarities between different works, especially when using relatively generalized diagrams for overview representation. However, this process itself is common, and also not a contribution we claimed.'\\n\\nThe works we cited above have already been referenced in our manuscript. Additionally, all the mentioned works in the reviewer\\u2019s comment, RTFS-Net, TF-GridNet, and CTCNet share a similar overall pipeline to ours, which means all these works themselves share a similar pipeline. Then does that mean all these works have ethics issues? Clearly, no, because the overall pipeline is not the main claimed contribution of any of these works, nor is it our claimed contribution.\"}",
"{\"summary\": \"This paper presents a framework called Cross-Modal Contextual Modeling (CM2) to improve Audio-Visual Speech Enhancement (AVSE). CM2 integrates two types of contextual information\\u2014semantic and signal context\\u2014across audio and visual data to enhance speech quality in noisy environments. The semantic context helps the model infer missing or corrupted speech by maintaining consistency across segments, while the signal context leverages coherence within signal frames. Additionally, the approach emphasizes the importance of visual features, such as the speaker's facial cues, which can correlate with audio frequency characteristics, thus aiding the enhancement process.\", \"the_model_uses_three_main_components_to_build_and_integrate_these_contexts\": \"a Semantic Context Module (SeCM) for initial contextual extraction, a Signal Context Module (SiCM) for signal-level context from noisy inputs, and a Cross-Context Fusion Module (CCFM) to combine these contexts. This architecture allows for detailed context fusion across different modalities, effectively improving speech clarity, especially in challenging low signal-to-noise ratios. Experimental results show that CM2 outperforms other state-of-the-art models, demonstrating substantial gains in metrics related to speech quality and intelligibility.\", \"methodology\": [\"SeCM: Processes Video and Audio in Time Domain audio stream, proposes three solutions (V, PV, PAV) from other papers, and test to find the best approach.\", \"Audio Encoder: Processes audio in TFM Domain, uses 4 stacked convolutions.\", \"SiCM: uses a bidirectional Mamba approach to process along different dimensions and time directions.\", \"Time-Frequency Upsampler: two transpose convolutions with BN and prelu. 
Used to take the frequency dim from 1 to match the frequency dim of the other audio route.\", \"CSBlock: split across the channels and concatenate cross-modality information\", \"AF Block - Generate attention map by applying linear to P, use two SiCM on P^c and E^c, then multiply the mask by both features and they are added together. A 2d conv binds the information together.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The introduction and literature review show a strong and detailed narrative of their task, their goals and their contributions. It is well researched and has a comprehensive view of current methods, contextualising their methodology and results. Their methods section is long and detailed, with many interesting approaches to the problems that they face. They combine a series of current techniques, such as Mamba and individually processing the time and frequency dimensions, and then applying a global operation down both dimensions. They use a discriminator to add another loss signal to the training process, and in their experiments they cover a range of datasets and evaluation metrics, strengthening their claims. This shows good scientific rigour.\", \"weaknesses\": [\"The weaknesses mainly come from technical writing problems and lack of clarity.\", \"SiCM is defined, but it is not clear how it is used. This section appears to define the equations twice, which is unnecessary. Its inputs are not defined - what is $Q_{in}$ and how does it connect with the rest of the model? 
Use of Mamba is interesting, bidirectional seems unusual - and they do not have experiments to back up this choice.\", \"CM_2:\", \"Line 278 - should the notation be $B F_x \\\\times T_x \\\\times C$ instead?\", \"Line 283 \\\"diverging\\\" is repeated.\", \"Line 301 the $F_e$ and $F_p$ are not clearly defined (1d convolutions?).\", \"On equations 6 and 7, add some spaces after the \\\"...\\\" and after the \\\",\\\"; otherwise it's too hard to read\", \"Line 302: it should be \\\"contains\\\" not \\\"contain\\\"\", \"3.5 the use of the SiCM notation is a little confusing. Earlier, it refers to a specific set of Mamba-based operations, but here it is for the entire frequency processing and time processing modules. Please keep the nomenclature consistent.\", \"The Discriminator is mentioned but undefined. Papers need to be reproducible.\", \"The methods they compared to in the results tables are quite old. Would be interesting to compare with modern AVSS models by setting speaker 2 = noise.\", \"Finally, while this is not a strict requirement, open sourcing the code after the paper's release would help understand the methods better, as the methods are quite convoluted.\"], \"questions\": [\"AF Block:\", \"Equation 11 is an interesting operation. I would expect something like $MI + (1-M)I$ if $M$ were a Mask. Could you provide some justification/insight into this operation, such as motivation/related work?\", \"\\\"Attention\\\" usually refers to a specific set of operations. To make this block resemble attention, you could apply two SiCMs to $P^c$ (to make a $K$ and $V$), then one SiCM to $E^c$ (to get $Q$), and then create an attention map by applying cross attention with $Q$, $K$ and $V$. Of course, this would be quadratic in $T_x$, so this may be computationally prohibitive. 
Would it be possible to explore alternative operations?\"], \"experiments\": [\"Would it be possible to add Si-SNR(i) and SDR(i) metrics to the results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": [\"Figure 2 looks very similar to RTFS-Net (https://arxiv.org/abs/2309.17189). There is some overlap between the methods introduced in this paper, and the contributions detailed in RTFS-Net and TF-GridNet (https://arxiv.org/abs/2209.03952). While the finer details differ, the overall approach is very similar. Please add citations.\", \"CSBlock: This method of switching the dimensions was first introduced by DPRNN in 2019. Others' work should be properly cited and referenced. This method has been used every single year since 2019, such as by DPTNet in 2020, TF-Gridnet in 2022, and RTFS-Net in 2023.\"], \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
ENv1CeTwxc | Segment Any 3D Object with Language | [
"Seungjun Lee",
"Yuyang Zhao",
"Gim Hee Lee"
] | In this paper, we investigate Open-Vocabulary 3D Instance Segmentation (OV-3DIS) with free-form language instructions. Earlier works mainly rely on annotated base categories for training, which leads to limited generalization to unseen novel categories. To mitigate the poor generalizability to novel categories, recent works generate class-agnostic masks or project generalized masks from 2D to 3D, subsequently classifying them with the assistance of a 2D foundation model. However, these works often disregard semantic information in the mask generation, leading to sub-optimal performance. Instead, generating generalizable but semantic-aware masks directly from 3D point clouds would result in superior outcomes. To this end, we introduce Segment any 3D Object with LanguagE ($\textbf{SOLE}$), which is a semantic and geometric-aware visual-language learning framework with strong generalizability by generating semantic-related masks directly from 3D point clouds. Specifically, we propose a multimodal fusion network to incorporate multimodal semantics in both backbone and decoder. In addition, to align the 3D segmentation model with various language instructions and enhance the mask quality, we introduce three types of multimodal associations as supervision. Our SOLE outperforms previous methods by a large margin on ScanNetv2, ScanNet200, and Replica benchmarks, and the results are even close to the fully-supervised counterpart despite the absence of class annotations in the training. Furthermore, extensive qualitative results demonstrate the versatility of our SOLE to language instructions. The code will be made publicly available. | [
"Open-set",
"3D Instance Segmentation",
"Multimodal"
] | Accept (Poster) | https://openreview.net/pdf?id=ENv1CeTwxc | https://openreview.net/forum?id=ENv1CeTwxc | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vijyuVdAFe",
"mWVFpCgRll",
"grgXvnsxuv",
"eJX4AvGCkK",
"DS8V4IWqR3",
"2zavQMe3BX"
],
"note_type": [
"official_review",
"official_review",
"meta_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1730538095680,
1730706510167,
1734891499774,
1737523624426,
1730373781446,
1730867915823
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4198/Reviewer_7Fsw"
],
[
"ICLR.cc/2025/Conference/Submission4198/Reviewer_SCdm"
],
[
"ICLR.cc/2025/Conference/Submission4198/Area_Chair_oVwe"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4198/Reviewer_bJcr"
],
[
"ICLR.cc/2025/Conference/Submission4198/Reviewer_yiYW"
]
],
"structured_content_str": [
"{\"summary\": \"This work focuses on open-vocabulary 3D instance segmentation with language instructions. To fully utilize the semantic information in the generated mask, the authors propose a framework, named Segment any 3D Object with LanguagE (SOLE). Specifically, they introduce a feature ensemble to fuse features from 3D backbone and 2D CLIP. Then, they propose a cross-modality decoder (CMD) to integrate the point-wise features and textual features. Last, the authors propose three types of association to align the mask feature with language instruction. Experimental results on ScanNetv2, ScanNet200, and Replica show that the proposed method outperforms existing methods including OpenIns3D and OpenMask3D. However, I still have some questions about the methodology and experiments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed feature ensemble module is effective in fusing features from both the backbone and CLIP.\\n2. The proposed MEA effectively improves the model performance by introducing fine-grained visual-language association.\\n3. The proposed SOLE shows superior performance on ScanNetv2 and ScanNet200 compared to both OpenIns3D and OpenMask3D.\", \"weaknesses\": \"1. In Table 5, it is unclear whether the proposed CMD is effective under a voxel size of 4cm. It would be better to provide the results that remove the CMD under a voxel size of 4cm and compare with the results in the 3rd row.\\n2. Is the proposed method sensitive to the hyperparameters $\\\\lambda_{MMA}, \\\\lambda_{dice}, \\\\lambda_{BCE}$? More discussions are required.\\n3. In Tables 1-4, it is unclear why the authors ignore the results of Open3DIS with both 2D and 3D supervision. More discussions are required.\\n4. In Eq(4), why not use cosine similarity to measure the distance between the mask features and the associate features? 
Note that using cosine similarity is common practice to measure the distance between features, and is used by OpenSeg and CLIP. Besides, on line 341, $p(\\\\cdot, \\\\cdot)$ should be $p(\\\\cdot)$.\\n5. As the three types of associate features lie in different feature spaces, it is difficult for a model to learn the mask features that aggregate all the advantages of associate features by Eq(4). The results in Table 6 also show that using all three types of association may not be the best choice. Why not use separate mask features for each associate feature and concatenate them together? \\n6. On line 457, the authors argue that \\u201csmall voxel size can save the memory requirements\\u201d. In fact, using larger voxel size can reduce the number of voxels and is more memory efficient.\\n7. Some of the references need to be updated. For example, Open3DIS is currently published in CVPR 2024.\", \"questions\": \"My main concerns are about the method and experiments. Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a method for segmenting any 3D object with language, which tries to improve the generalizability to novel categories. The suggested method is a semantic and geometric-aware visual-language learning framework. A multimodal fusion network is given to incorporate multimodal semantics, and three types of multimodal associations as supervision are introduced to align the 3D segmentation model with various language instructions and enhance the mask quality. Overall, the suggested method outperforms previous methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem studied is interesting and has broad application prospects, particularly in the interactive understanding of 3D scenes.\\n\\nThis paper is well-presented with a clear motivation, and sufficient detail provided in the methods, making it easy to follow.\\n\\nThe proposed method offers a new perspective, which is to directly predict semantic-related masks from 3D point clouds with multimodal information.\", \"weaknesses\": \"What do the five parts in the Feature Backbone represent? Although not a contribution of this paper, it would be best to clearly illustrate this for better self-containment.\\n\\nThe transition from class-agnostic to class-aware is achieved by introducing point-wise CLIP features. However, converting 3D point clouds to images for point-wise CLIP feature extraction and then projecting the features back to 3D seems to have significant computational overhead.\", \"questions\": \"CLIP demonstrates strong generalization at the image level. How can 2D frame features be accurately transferred to point-wise features?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper introduces a method named SOLE, which segments any 3D object with language, and tries to improve the generalizability to novel categories. The manuscript was reviewed by four experts in the field. The recommendations are (3 x \\\"6: marginally above the acceptance threshold\\\", \\\"8: accept, good paper\\\"). Based on the reviewers' feedback, the decision is to recommend the acceptance of the paper. The reviewers did raise some valuable concerns (especially additional and important experimental evaluations and ablation studies, needed comparisons with previous literature (clarification regarding technical insights), and further polishing of the manuscript) that should be addressed in the final camera-ready version of the paper. The authors are encouraged to make the necessary changes to the best of their ability.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers mainly hold concerns regarding additional and important experimental evaluations and ablation studies (Reviewer yiYW, 7Fsw, bJcr), needed comparisons with previous literature (Reviewer bJcr), detailed clarification on statements (Reviewer yiYW, SCdm, 7Fsw, bJcr), and further polishing of the manuscript (Reviewer 7Fsw, bJcr). The authors address these concerns with detailed and extra experiments and commit to polishing the revised version further.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"The authors present SOLE, an approach to Open-Vocabulary 3D Instance Segmentation that uses a training-based pipeline. Basically, it proposes a new method to fuse CLIP features earlier in the network to produce more text-aware masks in testing. Particularly, to address the limitation of overfitting to seen class names in training, SOLE presents a new semantic-aware mask generation module, which integrates rich textual semantics through multimodal embeddings derived from CLIP and captioning models, allowing the model to effectively capture language-domain features. The framework introduces three multimodal association techniques (MVA, MCA, and MEA) to align 3D segmentation with language instructions, enhancing mask quality. SOLE achieves state-of-the-art performance on the ScanNetv2, ScanNet200, and Replica benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Multimodal Fusion Network: SOLE\\u2019s multimodal fusion network integrates visual and language semantics, enhancing segmentation quality and generalizability.\", \"Alignment with Language Instructions: The model introduces three multimodal associations to align 3D segmentation with language instructions effectively.\"], \"weaknesses\": \"+ Still requires images in testing to extract per-point CLIP features, slowing down the whole process compared to the baselines that do not associate semantic features from text.\\n+ Although this paper does not use class names, they are already available, and much easier to annotate than the ground-truth 3D masks. Leveraging this information or not is up to the approaches. This cannot be claimed as an advantage. \\n+ Domain Adaptation Testing on ScanNet++: To assess SOLE\\u2019s generalizability, testing its domain adaptation performance on ScanNet++ [a] with over 1,500 class-agnostic categories and an open-vocabulary subset of 100 classes would be valuable. 
The unique point distribution in this dataset could further validate SOLE\\u2019s robustness, and strong performance here would mark a significant achievement.\\n+ Expanding Multimodal Associations with Large Language Models: The multimodal associations module\\u2019s versatility opens up opportunities to incorporate large language models beyond DeCap [b], including 3D-LLM [c], Ferret [d], OSPrey [e], and others. \\n\\n[a] Yeshwanth, Chandan, et al. \\\"Scannet++: A high-fidelity dataset of 3d indoor scenes.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[b] Li, Wei, et al. \\\"Decap: Decoding clip latents for zero-shot captioning via text-only training.\\\" arXiv preprint arXiv:2303.03032 (2023).\\n\\n[c] Hong, Yining, et al. \\\"3d-llm: Injecting the 3d world into large language models.\\\" Advances in Neural Information Processing Systems 36 (2023): 20482-20494.\\n\\n[d] You, Haoxuan, et al. \\\"Ferret: Refer and ground anything anywhere at any granularity.\\\" arXiv preprint arXiv:2310.07704 (2023).\\n\\n[e] Yuan, Yuqian, et al. \\\"Osprey: Pixel understanding with visual instruction tuning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"See Weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"Unlike previous methods that decouple mask proposal and open-world semantic prediction, this paper proposes the SOLE framework, which unifies these two processes into a single mask proposal framework by incorporating point-wise CLIP features into the mask proposal stage. SOLE achieves state-of-the-art results on several open-world instance segmentation benchmarks, such as ScanNetv2 and ScanNet200.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **[Reasonable Design]** Unlike the previous paradigm, which separates \\\"mask proposal\\\" and \\\"mask understanding\\\", this framework combines these two processes. This integrated approach seems reasonable, as the inclusion of additional features from an image foundation model is likely to improve mask proposal quality and, consequently, overall performance.\\n\\n2. **[Good Results]** The reasonable design leads to improved accuracy compared to previous literature. The paper validates its performance on a sufficient number of benchmarks.\", \"weaknesses\": \"1. **[Potential Efficiency Issue]** A major concern relates to the original SOLE\\u2019s efficiency, particularly in terms of speed and memory consumption. Aggregating raw 2D images into a 3D point cloud is likely to be slow. Even if this process is considered a preprocessing step, the loading and processing of per-point CLIP features could also be extremely resource-intensive. This may pose a limitation for real-world applications.\\n\\n2. **[Evaluation of Efficiency]** Building on the previous point, the manuscript lacks a detailed breakdown of model speed and memory consumption. This is a crucial metric for real-world applications and the potential scalability of this method. Identifying which components contribute to inefficiency would be valuable for future research. 
Although some numbers are provided in the appendix, this analysis is important, even if the results are not entirely favorable.\\n\\n3. **[Reliance on Over-segmentation GT]** Inherited from Mask3D, the proposed method also relies on graph-based over-segmentation results, which are used for ground truth labeling in ScanNet and ScanNet200. This reliance may introduce certain issues (though originally introduced by Mask3D).\", \"questions\": \"For detailed questions, please refer to the weaknesses section. Regardless, I think this work is a good addition to the existing progress in 3D open-world instance segmentation. I hope to see further justification related to model efficiency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
ENVwvyiJXY | Dataset Distillation for Domain Generalization | [
"Youngseok Yoon",
"Suha Kwak"
] | Dataset Distillation (DD) has been applied to various downstream tasks and recently scaled to ImageNet-1k, highlighting its potential for practical applications. However, in real-world scenarios, robustness to unseen domains is essential, and the robustness of models trained on synthetic datasets remains uncertain. To address this, we propose a novel task, Dataset Distillation for Domain Generalization (DD for DG), and evaluate the unseen domain generalization of models trained on synthetic datasets distilled by state-of-the-art DD methods using the DomainBed benchmark. Additionally, we introduce a new method for this task, which interprets DD through the lens of image style transfer, achieving superior performance in unseen domain generalization compared to baseline approaches. | [
"Dataset Distillation",
"Domain Generalization",
"Style Transfer",
"Self-Supervised Learning"
] | https://openreview.net/pdf?id=ENVwvyiJXY | https://openreview.net/forum?id=ENVwvyiJXY | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"qQJznKGIzg",
"dHnb5q9tyw",
"X2kOCxqbCZ",
"R6wxCkCkwW",
"LJbaTbFjLD"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730648643426,
1730652041495,
1730522921695,
1731583553448,
1730704957014
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8996/Reviewer_JNQZ"
],
[
"ICLR.cc/2025/Conference/Submission8996/Reviewer_ewnr"
],
[
"ICLR.cc/2025/Conference/Submission8996/Reviewer_oiLy"
],
[
"ICLR.cc/2025/Conference/Submission8996/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8996/Reviewer_Nrng"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces a novel problem, 'Dataset Distillation for Domain Generalisation', which aims to increase the generalisation ability of a model trained on a distilled dataset. This new framework is becoming important as the importance of dataset distillation has been emerging in recent years. To resolve the issues, the authors provide two main components: (1) Domain Transfer Learning and (2) Domain Style Mixing, which were inspired by the similarity between the dataset distillation process and the domain generalisation process. The proposed method increases the generalisation performance on multiple benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The problem this paper proposes is quite compelling and yet underexplored. Since the dataset distillation technique is gathering the attention of the ML community for various reasons, it is natural to explore the effects of the distilled dataset on the final model from various perspectives.\\n\\n2. The paper is well organised and well written. In particular, the authors introduce the field of dataset distillation well.\\n\\n3. The authors tried to validate the proposed method across various architectures and datasets.\", \"weaknesses\": \"1. Lack of validation on large-scale datasets. While the authors present experiments with various datasets, most of them are small-size datasets. I believe there are larger datasets for DG tasks, e.g., DomainNet [r1]. While dataset distillation methods might not work well on large-scale datasets, the authors state that they have now been scaled up to the ImageNet-1K level. Hence, I think the authors should at least validate their method at the DomainNet scale, which is considerably smaller than ImageNet but larger than the other datasets in the paper.\\n\\n2. Lack of comparison with existing methods. While the authors try to compare their method with existing dataset distillation methods, I cannot find a comparison with existing DG methods. 
A lot of existing DG methods can be naively incorporated into the dataset distillation pipeline, e.g., distill the dataset first, then use DG methods when training the network on the distilled dataset. I suggest the authors provide this kind of comparison with existing DG methods. \n\n3. Necessity of domain labels. The authors utilise domain labels across their entire process. However, explicitly dividing a dataset into separate domains might be impossible in real-world environments. Even if it is possible, it requires human expertise, which is quite expensive. I wonder if the proposed method can be used without domain labels. \n\n4. Lack of motivation/clarification on DSM. While the DSM module is one of the main components and may boost the performance, I cannot find the motivation related to this paper. Is there any motivation for DSM related to the entire contribution? Also, in the DSM process, do we interpolate the 'weights' of the style transfer network? I am asking this because it looks like so, which is quite weird, according to line 362. Also, I think the authors should provide comparison results with other kinds of style manipulation methods, since there are a bunch of style manipulation methods in DG fields, for example MixStyle [r2].\n\n\n[r1] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In ICCV, 2019.\n\n[r2] Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with MixStyle. In ICLR, 2021.\", \"questions\": \"Please see the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper addresses the challenge of domain generalization (DG) by proposing a novel task called Dataset Distillation for Domain Generalization (DD for DG). The authors highlight the significance of robustness to unseen domains, especially when models are trained on synthetic datasets. They evaluate the unseen domain generalization of models using synthetic datasets distilled through state-of-the-art Dataset Distillation (DD) methods, employing the DomainBed benchmark. A new method is introduced that interprets DD in the context of image style transfer, demonstrating improved performance in generalization compared to existing baseline approaches. The study provides a comprehensive overview of how DD can be leveraged for effective domain generalization in practical applications.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of DD for DG represents a novel perspective in the intersection of dataset distillation and domain generalization, addressing a significant gap in the literature.\\n2. The use of the DomainBed benchmark provides a rigorous framework for evaluating model performance across various unseen domains, ensuring that results are robust and generalizable.\\n3. The paper discusses the practical implications of using distilled datasets, which can potentially reduce the costs and complexities associated with large-scale data handling in real-world applications.\", \"weaknesses\": \"1. While the paper showcases improvements in specific benchmarks, it may not sufficiently explore how these methods perform across a wider range of datasets or real-world scenarios outside of the chosen benchmarks, such as auto-driving scenarios .\\n2. The efficacy of the proposed method heavily relies on the quality of the distilled datasets. If the initial synthetic datasets are flawed, the results may not hold.\\n3. 
The new method, while promising, may introduce additional complexity in practical applications, potentially limiting its adoption in less technical environments.\\n4. The approach may struggle with domain shifts that significantly differ from the training conditions. If the unseen domains introduce characteristics not represented in the synthetic datasets, the models may underperform.\\n5. Many of the figures and images included in the PDF have low resolution, making them difficult to read and interpret. This can hinder the reader's understanding of key concepts and results presented in the paper.\", \"questions\": \"1. How do the results compare when applying the proposed methods to diverse datasets not included in the DomainBed benchmark?\\n2. What specific challenges were encountered in the implementation of the new method, and how were they addressed?\\n3. Could the findings suggest any adjustments to current practices in dataset generation or augmentation to further enhance model robustness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": [\"This paper presents a new task, called dataset distillation (DD) for domain generalization (DG). As the name suggests, the goal is to distill multiple source domains via dataset distillation so that the model trained on the distilled dataset exhibits domain generalization properties (good performance on unseen domains). This is a challenge because simply applying dataset distillation to the combined source domains seems to be ineffective.\", \"The paper proposes domain transfer learning and domain style mixing as the main approach for dataset distillation in DG. Domain transfer learning aims to create the synthetic dataset by matching the style features of different domains, while domain style mixing enhances this by mixing the style features of different domains.\", \"Experiments are conducted on a subset of the DomainBed benchmark, using VLCS, PACS, Office-H, and TerraInc.\"], \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The results in Table 1 show that there is a difficulty in directly applying well-known Dataset Distillation techniques to the multi-source DG setting. It clearly shows that naively applying DD across domains does not generate synthetic datasets that can be useful for DG purposes.\\n2. The authors make an effort to focus on larger-scale dataset settings (although I do not agree with what they call \\\"large scale\\\", which will be addressed under weaknesses).\\n3. The proposed method, especially under low IPC settings, shows some promise.\", \"weaknesses\": \"Weaknesses are written in no particular order.\\n1. Figure 1 does not serve much purpose. The only reference to it is in L191, and the caption to support it is short. 
It's difficult to decipher as a stand-alone figure, but no reference is made in the text (except for \\\"the overall architecture of the proposed approach is shown in Figure 1\\\"), so it doesn't help in understanding what the proposed method is doing.\\n\\n2. Using \\\"style\\\" features for domain adaptation or generalization is not a novel concept. It could be argued that using this concept is novel for dataset distillation, but given that the proposed task is a direct combination of DD and DG, I am not convinced that this can be considered a significant contribution. Minimizing style discrepancies across source datasets to make the model invariant to style variations has often been used for DG. Also, style mixing has been used in many DG methods as well (for example, in \\\"Domain Generalization with MixStyle\\\", as cited by this paper). \\n\\n3. While the authors claim \\\"small-scale datasets, ..., are not considered in this work as the dataset distillation task is mainly focused on large-scale dataset\\\" (L372), I cannot agree that this paper is conducting experiments on \\\"large-scale\\\" datasets. Let's look at the tested datasets: VLCS, PACS, Office-H, TerraInc. All four datasets use a training image size of 224x224 (yes, this may be considered a \\\"large image size\\\" compared to MNIST or CIFAR), but they all contain fewer than 1000 samples. Is this really considered \\\"large-scale\\\"? In contrast, when we talk about dataset distillation scaling to large-scale datasets, we would like dataset distillation to work on ImageNet, which has ~1.2 million samples. I'm not really convinced that we need dataset distillation for datasets with <1000 samples. It is also weird that **DomainNet** has been left out of DomainBed, given that DomainNet is the largest dataset in DomainBed.\\n\\n4. The writing of this paper is quite hard to follow. There are a lot of grammatical errors and typos. 
Also, some sentences/paragraphs are incoherent or run-on sentences (e.g., L205~207), or just don't make sense (L266: \\\"we set the target style of the synthetic images as the sum of the MSE loss\\\"). Algorithm 1 also seems unnecessary; it gives 4 nested loops followed by two lines of text, which could be simplified to a few sentences in the text itself. Overall, the paper feels very rushed.\\n a. Some typos I found: L271, L288, L379, and many more\\n\\n5. Table 2 should also show results using the original (non-distilled) dataset as a reference to how effective the dataset distillation is. Same goes with Table 3 and Table 1\", \"questions\": [\"Why was DomainNet omitted from DomainBed? It is the largest dataset in this benchmark.\", \"Why does the proposed method show more improvement under low IPC settings, compared to higher IPC settings?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper considers the dataset distillation problem in the context of domain generalization, i.e., generating synthetic datasets which enable training a domain-generalizable model. The authors propose using domain transfer learning and domain style mixing to solve this problem. Experiments verify the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The setting this paper considers is new and meaningful.\", \"The general idea is reasonable.\", \"Empirical results are good.\"], \"weaknesses\": \"- The paper is not clearly written. Some details are missing or confusing to me. For example, the definition of $\\\\mathcal{L}_{ce}(S)$ in Eq. (1) or Eq. (3) seems to be inconsistent with that in Eq. (10).\\n\\n How do we obtain $\\\\phi(x)$ in Eq. (10)? \\n\\n What parameters are updated using Eq. (12)? \\n\\n How do you update $S_{k,m}$ in Algorithm 1? \\n\\n In Figure 3, both (a) and (b) denote the normalized domain-specific style loss. What is the difference between (a) and (b)? \\n\\n I see in Algorithm 1, $\\\\phi$ is also an output. So what will it be used for? \\n\\n- Some typos exist. For example, in Eq. (8), I think the subscript should be $S$ instead of $\\\\tilde{S}$. \\n\\n- For the visualization, do you have any insights? I cannot relate the performance to the visualizations.\", \"questions\": \"Please refer to the weakness part. The authors should clarify all the confusing points and provide the necessary details to help the readers understand their method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EMpvfnzQqD | OTTC: A differentiable alignment approach to automatic speech recognition | [
"Yacouba Kaloga",
"Shashi Kumar",
"Petr Motlicek",
"Ina Kodrasi"
] | The Connectionist Temporal Classification (CTC) and transducer-based models are widely used for end-to-end (E2E) automatic speech recognition (ASR). These methods maximize the marginal probability over all valid alignments within the probability lattice over the vocabulary during training. However, research has shown that most alignments are highly improbable, with the model often concentrating on a limited set, undermining the purpose of considering all possible alignments. In this paper, we propose a novel differentiable alignment framework based on a one-dimensional optimal transport formulation, enabling the model to learn a single alignment and perform ASR in an E2E manner.
We define a pseudo-metric, called Sequence Optimal Transport Distance (SOTD), over the sequence space and highlight its theoretical properties.
Based on the SOTD, we propose Optimal Temporal Transport Classification (OTTC) loss for ASR and contrast its behavior with that of CTC.
Experimental results on the English Librispeech and AMI datasets demonstrate that our method achieves competitive performance compared to CTC in ASR.
We believe this work opens up a potential new direction for research in ASR, offering a foundation for the community to further explore and build upon. | [
"ASR",
"Optimal Transport",
"Sequence to Sequence",
"Alignment"
] | Reject | https://openreview.net/pdf?id=EMpvfnzQqD | https://openreview.net/forum?id=EMpvfnzQqD | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ypqXFIqPcQ",
"skKi32LI0Z",
"sa0BFzvzwM",
"n2xDg9rJTK",
"mRpS909pw1",
"gbnBWWUBjT",
"fAJZR6JsVf",
"cM72bOYrZD",
"Z9vPreRlcB",
"Yp7YMUHYlt",
"UygAci5n1K",
"QsdrCVL5mk",
"Pac8FzEMek",
"OxLcvPWygB",
"OVYZA1F6SQ",
"NHgBdQeWMW",
"JSuVESBEap",
"D6utNnvgg1",
"BSunB9Fjqv",
"8YiCn21cTD",
"8T38fjKx8Z",
"2KthDSK9NV",
"15PFQ5tmFX",
"0xhAjQUECr"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision"
],
"note_created": [
1734431070244,
1732113921042,
1732103520448,
1732505376721,
1732128462675,
1730603987755,
1732533808741,
1732542429291,
1732254124183,
1730638347888,
1733197191246,
1732114694968,
1732254545922,
1732885380438,
1732114867516,
1732555690953,
1732125313345,
1732884029495,
1732411516431,
1732683911456,
1732534669641,
1732532067149,
1730680267272,
1737524212559
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12744/Area_Chair_Lb1w"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_wARR"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_6GZM"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_wARR"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_wARR"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_6GZM"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_wARR"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_bmks"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_6GZM"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_6GZM"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12744/Reviewer_bmks"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors introduce a novel end-to-end loss for automatic speech recognition (ASR) called Optimal Temporal Transport Classification (OTTC), which simultaneously learns temporal alignment and audio frame classification. This loss is derived from the proposed Sequence Optimal Transport Distance (SOTD) framework, which establishes pseudo-metrics over the sequence space. A key component of this framework is a parameterized, differentiable alignment model based on one-dimensional optimal transport, providing linear complexity in both time and space.\\n\\n The key strength of the work is its originality, since the technique to address the CTC \\\"peaky\\\" behavior is novel, as indicated by the three independent reviewers. The key concern is the performance, which lags behind both CTC and the state of the art. Only after the reviewers' requests for additional experiments does the proposed technique, after some twists and tricks, reduce the gap with CTC. Furthermore, the tuning parameter \\\"beta\\\" needs more investigation.\\n\\nOverall, it should be the authors' responsibility to demonstrate that the proposed approach can outperform the state-of-the-art within the submitted work, rather than leaving this to future research.\", \"additional_comments_on_reviewer_discussion\": \"The paper is well-written, which made the review process much easier for the reviewers. The discussion phase was engaging, with good collaboration between the authors and the reviewers. All reviewers acknowledged the originality of the work, but they also noted issues with its current performance (lagging behind SOTA). Reviewer bmks placed greater emphasis on the originality, recommending the paper for acceptance. In contrast, Reviewer 6GZM focused more on the performance gap with the current state-of-the-art. Lastly, Reviewer wARR increased their rating during the discussion phase, but the score was increased only to 6 due to concerns about the current model's performance. 
It should be noted that even the most positive reviewer concluded their final response to the authors by stating, \\\"the challenge now is to beat the state of the art.\"}",
"{\"title\": \"Author Response to Reviewer 6GZM (1/3)\", \"comment\": \"We appreciate the reviewer\\u2019s valuable feedback and insightful comments. Below, we address each of the review points in detail.\\n\\n- **[W1]** *The main weakness of the paper is that the experimental results didn\\u2019t show the better ASR accuracy of the proposed method when compared with CTC loss.*\\n\\nAlthough the proposed OTTC loss currently lags behind CTC in terms of ASR performance, we would like to emphasize that the primary goal of this paper is to introduce a completely new framework for ASR. This framework offers end-to-end differentiability and addresses longstanding issues in existing approaches, such as the peaky behavior. The experiments serve mainly as proof of concept, demonstrating how the method works and its potential. With further work from the community, there is potential to develop a competitive and more cost-effective method based on this approach.\\n\\nWhile further testing the proposed OTTC framework after the paper submission deadline, we also experimented with different pre-trained models such as Wav2Vec2-large [1] instead of XLSR (as shown in the first submission). The obtained results depicted in the table below show that with such a pre-trained model, OTTC performance is very similar to that of CTC. These results further demonstrate the necessity to investigate the OTTC further and its potential of becoming a standard differentiable loss for ASR. \\n\\n| **Model** | **100h-LibriSpeech** | | | | **360h-LibriSpeech** | | | | **960h-LibriSpeech** | |\\n|-----------|----------------------|------------|------------|----------------------|------------|------------|----------------------|------------|\\n| | test-clean | test-other | | | test-clean | test-other | | | test-clean | test-other |\\n| **CTC** | 3.36 | 7.36 | | | 2.77 | 6.58 | | | 2.20 | 5.23 |\\n| **OTTC** | 3.77 | 8.55 | | | 3.00 | 7.44 | | | 2.52 | 6.16 |\\n\\n\\n[1] A. Baevski, Y. Zhou, A. 
Mohamed, and M. Auli, \\u201cwav2vec 2.0: A framework for self-supervised learning of speech representations,\\u201d in *NeurIPS,* 2020.\\n\\n- **[W2]** *It\\u2019s claimed that the alignment from the OTTC model is better than that from CTC model because it\\u2019s not peaky. But the author didn\\u2019t compare the alignment with the ground truth alignment to measure the alignment accuracy with convincing numbers. Below are some detailed comments: \\u2022 Figure 4 and figure 5 show the alignment of OTTC model, but none of them show the \\u201cground truth\\u201d alignment. And the paper didn\\u2019t present measurements of the alignment accuracy from OTTC model in other way. So, it doesn't seem credible to claim that OTTC could get better alignment since it\\u2019s not compared with the ground truth alignment.*\\n\\nAnalyzing the alignment accuracy with concrete numbers is unfortunately not possible since one does not have access to the ground truth alignments (at least we are not aware of existing databases which also contain ground truth alignments). In ASR, we are only given the audio and the corresponding reference text, without a one-to-one alignment between the audio frames and text tokens. This is the reason why CTC and transducer models marginalize over all possible paths (alignments). In contrast, the proposed OTTC framework learns a single alignment, enabling end-to-end ASR while mitigating the peaky behavior commonly observed in CTC-based models.\\n\\nHowever, we agree with the reviewer that we should support our claim that the OTTC alignment avoids the peaky behavior observed in CTC, which is characterized by a significant proportion of audio frames being assigned to either the blank symbol or the space symbol (non-alphabet symbols) [2]. To this end, we calculated the average percentage of audio frames assigned to these two special symbols to quantitatively assess the model's peaky behavior. 
For the test-clean set, we found that **60.3%** of total frames in CTC models were assigned to these special symbols. In contrast, the OTTC model assigned only **22.9%** of frames to these symbols. This highlights the effectiveness of the alignment achieved by our proposed framework, which decisively avoids the extreme peaky behavior exhibited by CTC models.\\n\\n[2] Albert Zeyer, Ralf Schl\\u00fcter, and Hermann Ney, \\u201cWhy does CTC result in peaky behavior?,\\u201d *arXiv preprint arXiv:2105.14849*, 2021.\"}",
"{\"title\": \"Author Response to Reviewer wARR\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback and insightful comments, which have helped us improve the clarity and quality of our work. Below, we address each of the review points in detail.\\n\\n**[W1]** Most recent losses in ASR are variations and adaptations of CTC, including the transducers mentioned by the reviewer. This paper, however, was primarily intended to present a new proposal for accomplishing the ASR task entirely independent of CTC, enabling the community to test and build on this new idea. The experiments serve mainly as proof of concept, demonstrating how the method works and its potential. With further work from the community, a competitive and more cost-effective method could be developed based on this approach.\\n\\nWhile further testing the proposed OTTC framework after the paper submission deadline, we also experimented with different pre-trained models such as Wav2Vec2-large [1] instead of XLSR (as shown in the first submission). The results in the table below show that with such a pre-trained model, OTTC performance is very close to that of CTC. These results further demonstrate the need to investigate OTTC further and its potential to become a standard differentiable loss for ASR.\\n\\n| **Model** | 100h test-clean | 100h test-other | 360h test-clean | 360h test-other | 960h test-clean | 960h test-other |\\n|-----------|------------|------------|------------|------------|------------|------------|\\n| **CTC** | 3.36 | 7.36 | 2.77 | 6.58 | 2.20 | 5.23 |\\n| **OTTC** | 3.77 | 8.55 | 3.00 | 7.44 | 2.52 | 6.16 |\\n\\n[1] A. Baevski, Y. Zhou, A. Mohamed, and M. 
Auli, \\u201cwav2vec 2.0: A framework for self-supervised learning of speech representations,\\u201d in *NeurIPS,* 2020.\\n\\n**[W2]** Removing the weights prediction is equivalent to not searching for the best path, but instead maximizing the probability of the path where a frame is assigned to the same label throughout training. In this case, we completely lose the localization of the letters in the audio, which is why we didn\\u2019t include such experiments in the paper.\\nFollowing the reviewer\\u2019s feedback, we conducted an experiment using the 360-hour LibriSpeech setup with Wav2Vec2-large as the pre-trained model, employing fixed and uniform OT weights. We observed a WER of 3.51 on test-clean (compared to 2.77 for CTC and 3.00 for OTTC with learnable OT weights) and 8.24 on test-other (compared to 6.58 for CTC and 7.44 for OTTC with learnable OT weights). From these results, we note that while the WERs are only slightly degraded in comparison to OTTC with learnable OT weights, the localization is completely lost.\\n\\n**[Q1]** While we agree with the reviewer that the practical training cost is an important characteristic to analyze, unfortunately we cannot provide a fair comparison at the moment. In our current implementation, OTTC is slightly slower to train than CTC (approximately 60 samples per second vs. 90 samples per second for CTC), because CTC is directly implemented and optimized in C++, while OT uses vectorisation from the torch library. However, when appropriately optimized, OT is expected to be faster than CTC.\\n\\n**[Q2]** The peaky behavior of CTC models is characterized by a significant proportion of audio frames being assigned to either the blank symbol or the space symbol (non-alphabet symbols) [2]. Following the reviewer\\u2019s feedback, we calculated the average percentage of audio frames assigned to these two special symbols to quantitatively assess the model's peaky behavior. 
For the test-clean set, we found that 60.3% of total frames in CTC models were assigned to these special symbols. In contrast, the OTTC model assigned only 22.9% of frames to these symbols. This highlights the effectiveness of the alignment achieved by our proposed framework, which decisively avoids the extreme peaky behavior exhibited by CTC models.\\n\\nWhile it would indeed be important to further validate our claim using ground-truth token-time alignments, to the best of our knowledge, no databases with such ground-truth information are currently available.\\n\\n[2] Albert Zeyer, Ralf Schlüter, and Hermann Ney, \\u201cWhy does CTC result in peaky behavior?,\\u201d *arXiv preprint arXiv:2105.14849*, 2021.\\n\\n**[Q3]** We thank the reviewer for highlighting the typos. We have addressed them, and the corrections will be reflected in the revised version of our paper.\"}",
"{\"title\": \"Thanks for the detailed reply!\", \"comment\": \"Thanks for the detailed reply! I still have a few concerns:\\n\\n1. I understand that removing the weights prediction could lead to a loss in localization, but I'm curious whether the model could still be trained successfully without it? (Just a question for discussion, I don't require an experiment here.)\\n\\n2. The performance gap between CTC and OTTC is still noticeable, even with the pretrained Wav2Vec2-large (e.g., 2.20/5.23 vs. 2.52/6.16 with 960h-LibriSpeech). This raises concerns about the practical applicability of this method. Is there room for improvement?\"}",
"{\"title\": \"Author Response to Reviewer bmks (2/2)\", \"comment\": \"- **[Q5]** *L252, \\\"Sequences Optimal Transport Distance (SOTD)\\\": motivate this extension more? I.e., make it clear what is missing from the alignment model presented so far.*\\n\\nThe alignment introduced so far is a differentiable function that maps vectors $\\\\boldsymbol{\\\\alpha}$ and $\\\\boldsymbol{\\\\beta}$ to an alignment. This serves as a general alignment operation.\\nSOTD utilizes this alignment to compute distance measures between sequences, which necessitates the introduction of a cost function. Specifically, SOTD represents one approach to comparing sequences using this function, but other methods could also be developed based on this alignment. The contribution is therefore two-fold: the introduction of this differentiable alignment function and the development of SOTD, which employs it to define a distance measure between sequences.\\n\\n- **[Q6]** *L302, \\\"When the function F is powerful, the model can collapse \\\": be more precise than \\\"powerful\\\"? What types of specific functions would lead to collapse?*\\n\\nBy 'powerful,' we mean a function capable of adapting and performing a wide range of transformations. In soft-DTW, only the first and last elements of sequences are guaranteed to align, while all in-between frames or targets may be ignored; i.e., there is no guarantee that soft-DTW will yield a discrete monotonic alignment. A 'powerful' transformation $F$ can map $\\\\{\\\\mathbf{x}\\\\}$ to $F(\\\\{\\\\mathbf{x}\\\\})$ in such a way that soft-DTW ignores the in-between transformed frames $F(\\\\{\\\\mathbf{x}\\\\})$ and targets $\\\\{\\\\mathbf{y}\\\\}$, which we refer to as a collapse. This is why transformations learned through sequence comparison are typically constrained (e.g., to geometric transformations like rotations) [2]. 
Transformer architectures, being powerful, are prone to collapse, as demonstrated in a new experiment we conducted using soft-DTW as a loss function. On the 360h-LibriSpeech setup with Wav2Vec2-large [1] model, the best WER achieved using soft-DTW is 39.43. In comparison CTC yields 2.77, and our proposed OTTC yields 3.00. One of the key advantages of our method is that, by construction, such a collapse is not possible.\\nBased on the reviewer\\u2019s feedback, we will include these explanations and results in the revised version of the paper.\\n\\n[2] Titouan Vayer, L. Chapel, N. Courty, R\\u00e9mi Flamary, Yann Soullard, and R. Tavenard, \\u201cTime series alignment with global invariances,\\u201d *ArXiv, abs/2002.03848,* 2020a\\n\\n- **[Q7]** *L376, \\\"relaxation of the last term \\\": what does \\\"relaxation\\\" here and in the following mean...?*\\n\\nIn mathematical optimization, continuous relaxation refers to interpreting a discrete problem in a continuous manner. CTC paths represent discrete alignments, which are non-differentiable by nature. SOTD is a relaxation of a single path, as the alignment is now represented by $\\\\boldsymbol{\\\\gamma}$, which is continuously valued and differentiable. This approach ensures the differentiability of the path. For clarity, we have explicitly stated in the revised version of the paper that this refers to continuous relaxation.\\n\\n- **[Q8]** *L520, \\\"envision that learning label weights with suitable constraints can bridge the performance gap\\\", be more specific?*\\n\\nAs mentioned earlier, we conducted an oracle experiment where the label weights ($\\\\boldsymbol{\\\\beta}$) were derived from CTC, and observed that OTTC matches CTC performance in this case. Thus far, we have not found a more effective approach for selecting $\\\\boldsymbol{\\\\beta}$ than using a uniform distribution. 
While making \\\\$\\\\boldsymbol{\\\\beta}$ trainable is an option; however, if it is unconstrained, the model risks collapsing as seen with soft-DTW, as there would be no guarantee that every target is reached. \\nThe goal is to make $\\\\boldsymbol{\\\\beta}$ trainable while ensuring it does not contain zero coefficients, without excessively constraining it. This remains a challenging task, and our attempts so far have yielded only limited success. We will continue our investigations and hope the broader community may uncover effective solutions to this problem.\\n\\n- **[Q9]** *L048...., L190...., L195...., L271...., L325...., L455....,* and *L845....*\\n\\nWe thank the reviewer for carefully reading our paper and appreciate highlighting the typos. We have addressed them in the revised version of the paper.\"}",
"{\"summary\": \"This paper aims to learn sequence-to-sequence prediction and alignment simultaneously. To achieve this, the authors define a pseudo-metric called the Sequence Optimal Transport Distance (SOTD) over sequences, based on one-dimensional optimal transport. SOTD enables the joint optimization of target sequence prediction and alignment. They then derive the Optimal Temporal Transport Classification (OTTC) loss for automatic speech recognition (ASR). Experiments on the LibriSpeech and AMI datasets show that the proposed method achieves encouraging recognition accuracy, although it\\u2019s still worse than the popular sequence-to-sequence ASR modeling method Connectionist Temporal Classification (CTC). Besides, the alignment output from the OTTC model does not have the peaky behavior observed in CTC-based models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The Optimal Temporal Transport Classification (OTTC) loss proposed in this paper has two parts: one is alignment related, based on Sequences Optimal Transport, which is proved to be differentiable; the other is classification (or prediction) related, based on the cross-entropy loss. The advantage of this new loss is that it enables the model to learn both the alignment and classification jointly. The authors explain and prove the mathematical theory behind this loss definition in detail. The authors also conduct proof-of-concept experiments on the ASR task and compare the results with a CTC-based model. The results show the proposed method achieves reasonable speech recognition accuracy and alignment. In particular, the alignment is not peaky, unlike that observed in CTC-based ASR models.\", \"weaknesses\": \"The main weakness of the paper is that the experimental results didn\\u2019t show better ASR accuracy for the proposed method when compared with the CTC loss. 
It\\u2019s claimed that the alignment from the OTTC model is better than that from CTC model because it\\u2019s not peaky. But the author didn\\u2019t compare the alignment with the ground truth alignment to measure the alignment accuracy with convincing numbers. Below are some detailed comments:\\n\\u2022\\tFigure 4 and figure 5 show the alignment of OTTC model, but none of them show the \\u201cground truth\\u201d alignment. And the paper didn\\u2019t present measurements of the alignment accuracy from OTTC model in other way. So, it doesn't seem credible to claim that OTTC could get better alignment since it\\u2019s not compared with the ground truth alignment. \\n\\u2022\\tThe target of OT part of the loss is to find out optimal \\u201calpha\\u201d(or alignment), it would be better if the author do some analyze of the value \\u201calpha\\u201d of the trained model to show what\\u2019s the optimal value and does it have any physical meaning. \\n\\u2022\\tIn this paper, the authors only show the results of uniform distribution of value \\u201cbeta\\u201d and said learning the optimal beta is difficult. It would be better to show how the model will perform with other choice of beta (e.g. proportional to the letter duration) to show how \\u201cbeta\\u201d will affect the results with different value? If the method is sensitive to the choice of \\u201cbeta\\u2019. Then more work needs to be done to make this method applicable for real machine learning tasks. \\nBesides, there are some typos (or errors) in the paper. like:\\n\\u2022\\tIn equation (3). If the dimension of gamma is n*m, and the dimensions of 1n is n*1. Then the multiplication of these two matrices is not valid. Similarly, the transpose of gamma has dimension m*n, it couldn\\u2019t be multiplied with the matrix with dimension m*1. \\n\\u2022\\tIn equation (11). 
\\u201cA\\u201d in the left side of equal sign should be \\u201cW\\u201d, also \\u201cAW\\u201d in the right side of equal sign should be \\\"W\\u201d.\", \"questions\": \"\\u2022\\tFor OTTC model, the OT related parameters are frozen for the last 10 epochs in the experiment? Why is number 10 used here and whether other values have been explored? Or how much will this parameter affect the results?\\n\\u2022\\tIn section 6, it\\u2019s said that in the 960h-LibriSpeech training setup, it got 4.77% WER at epoch 30 and no meaningful improvement in WER at 40 epochs without freezing the OT weights. Does it mean the final WER is also around 4.77%? It\\u2019s also said the alignments remain relatively stable as training progresses. If so, freezing alignment vs. no freezing alignment shouldn\\u2019t have big difference, but based on table 1, freezing OT weights in the last 10 epoch could get 4.24% WER. Could the author explain more about this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for the detailed reply!\", \"comment\": \"Thanks for reply.\\n\\nI would suggest including above experimental results (removing the OT weight prediction head and using fixed and uniform OT weights) in the revised manuscript. \\n\\nBy the way, to obtain the token-time alignments, you could use toolkits such as:\\n- Montreal Forced Aligner, https://montreal-forced-aligner.readthedocs.io/en/v3.1.2/index.html#\\n- torchaudio: https://pytorch.org/audio/stable/tutorials/forced_alignment_tutorial.html\\n\\nI would also suggest including quantitative metrics to show the proposed method can get improved alignments.\"}",
"{\"title\": \"Thanks for the detailed reply!\", \"comment\": \"OK. I have updated the score from 5 to 6.\"}",
"{\"comment\": [\"Thanks for the detailed reply. Below are my comments.\", \"**[W1]**: The new results with Wav2Vec2-large as the seed also show an obvious gap between OTTC and CTC: > 10% relative WER increase. We couldn't say \\\"OTTC performance is very similar to that of CTC\\\" with such a gap.\", \"**[W2]**: Yes, there is no alignment information for the LibriSpeech set. But we could get alignments for it with some existing tools, like https://github.com/pettarin/forced-alignment-tools. Avoiding peaky behavior doesn't equate to improved alignment accuracy. I still think the authors should prove their claim that \\\"OTTC can learn more accurate alignments than the CTC model\\\" with proper metrics.\"]}",
"{\"summary\": \"The authors propose a novel end-to-end loss for automatic speech recognition (ASR), called Optimal Temporal Transport Classification (OTTC), jointly learning temporal alignment and audio frame classification. This loss is derived from the introduced Sequence Optimal Transport Distance (SOTD) framework, which constructs pseudo-metrics over the sequence space. Central to this framework is a parameterized and differentiable alignment model based on one-dimensional optimal transport, offering linear complexity in both time and space. Experimental results on the LibriSpeech and AMI datasets demonstrate that the proposed method achieves promising performance.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The idea of this work is novel and non-incremental, supported by detailed mathematical theoretical proofs. Meanwhile, the writing style is excellent and impressive.\", \"In theory, the proposed method achieves linear complexity both in time and space. Experimental results demonstrate that it mitigates the peaky behavior observed in Connectionist Temporal Classification (CTC) models.\"], \"weaknesses\": [\"The performance of the proposed method is noticeably inferior to that of CTC, which significantly restricts its applicability in real-world scenarios, especially considering that CTC already lags behind transducers and hybrid systems combining CTC and an attention decoder.\", \"The authors don't provide ablation experiments for the proposed OTTC model. 
I would suggest at least testing removing the OT weight prediction head and using fixed and uniform OT weights instead.\"], \"questions\": [\"While OTTC theoretically offers linear complexity both in time and space, how about the practical training cost in terms of the GPU memory usage and training time compared to CTC?\", \"In line 520-521, the authors state, \\\"Furthermore, our framework effectively addresses the peaky behavior commonly seen in CTC models, resulting in improved alignments\\\". To validate the claim of improved alignments, I would suggest computing quantitative metrics by comparing the decoding timestamps and the pre-computed ground-truth token-time alignments.\", \"I noticed several possible typos:\", \"In line 156-157, \\\"alignement\\\" should be corrected to \\\"alignment\\\".\", \"In line 253-254, \\\"[\\\" should be corrected to \\\"]\\\".\", \"In Equation 3, should \\\"$\\\\gamma 1_n=\\\\alpha$ and $\\\\gamma^T 1_m=\\\\beta$\\\" be \\\"$\\\\gamma 1_m=\\\\alpha$ and $\\\\gamma^T 1_n=\\\\beta$\\\"?\", \"In Equation 11, should \\\"A\\\" be corrected to \\\"W\\\"?\", \"In Equation 12, should \\\"$\\\\log p_{l_j} (x_i)$\\\" be corrected to \\\"$\\\\log p_{l_{y_j}} (x_i)$\\\"?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your responses, and for incorporating some of my feedback. I think the paper is (even) stronger now, but I will keep my original score. This is definitely an interesting way of framing the alignment problem -- the challenge now is to beat the state of the art.\"}",
"{\"title\": \"Author Response to Reviewer 6GZM (2/3)\", \"comment\": \"- **[W3]** *The target of OT part of the loss is to find out optimal \\u201calpha\\u201d(or alignment), it would be better if the author do some analyze of the value \\u201calpha\\u201d of the trained model to show what\\u2019s the optimal value and does it have any physical meaning.*\\n\\nAlthough it would be interesting to systematically derive a physical interpretation of an \\u201calpha\\u201d, we are not aware of such an interpretation. Currently we hypothesize that the optimal alpha may denote precise localization of spoken letters in the audio, and its cumulative sum may signify the speaking rate, as this would likely be easier for the model's classification part to learn. However, additional analysis is necessary to finalize this relation. \\n\\n- **[W4]** *In this paper, the authors only show the results of uniform distribution of value \\u201cbeta\\u201d and said learning the optimal beta is difficult. It would be better to show how the model will perform with other choice of beta (e.g. proportional to the letter duration) to show how \\u201cbeta\\u201d will affect the results with different value? If the method is sensitive to the choice of \\u201cbeta\\u2019. Then more work needs to be done to make this method applicable for real machine learning tasks.*\\n\\nWe would like to thank the reviewer for bringing up this important point. Based on the reviewers\\u2019 recommendation, we conducted several experiments to show how the model would perform for other choices of beta.\\n1) To show the importance of making beta learnable, we first experiment with learning beta using a trainable transformer decoder layer with tokenized reference text as input. We observe a degenerate solution in which all label weights (beta) are assigned to a single token, while all other tokens receive zero label weights, resulting in 100% WER. 
Intuitively, this behavior makes sense because the model can learn this shortcut, which still minimizes the loss (the loss goes to zero), as there are no constraints in the loss to prevent it. Next, we impose a constraint on the learnable beta values, ensuring they cannot fall below a certain threshold. However, we observe a slight degradation in performance, with around 1% degradation in WER for the 360-hour LibriSpeech setup.\\n2) To further highlight the importance of beta, we conducted an oracle experiment. In this experiment, we first force-align audio frames and text tokens using a CTC-based model trained on the same data. We then use this alignment to calculate beta values. In both the 100h-LibriSpeech and 360h-LibriSpeech setups, the OTTC model matches the performance of the CTC model. This result underscores the critical role of the choice of beta (whether learnable or not). With an appropriate selection of beta, the OTTC model can achieve performance on par with, or potentially surpass, the CTC-based model.\\n\\nWe will include these results in the revised version of the paper in Appendix A.3. Overall, the proposed framework shows great promise for ASR, and we hope our work paves the way for a new approach in the field.\\n\\n- **[W5]** *Besides, there are some typos (or errors) in the paper. like: \\u2022 In equation (3). If the dimension of gamma is n×m, and the dimension of 1n is n×1. Then the multiplication of these two matrices is not valid. Similarly, the transpose of gamma has dimension m×n, it couldn\\u2019t be multiplied with the matrix with dimension m×1. \\u2022 In equation (11). \\u201cA\\u201d in the left side of equal sign should be \\u201cW\\u201d, also \\u201cAW\\u201d in the right side of equal sign should be \\\"W\\u201d.*\\n\\nAll the typos have been addressed. Thank you for your feedback.\\n\\n- **[Q1]** *For OTTC model, the OT related parameters are frozen for the last 10 epochs in the experiment? 
Why is number 10 used here and whether other values have been explored? Or how much will this parameter affect the results?*\\n\\nIn our investigations so far, we arbitrarily selected this number of epochs as a hyperparameter without any tuning. To further understand its impact, we conducted additional experiments on the 360h-LibriSpeech setup using the Wav2Vec2-large model, when freezing the OT weights prediction head for the last 5 and 15 epochs. When frozen for the last 5 epochs, we achieve a WER of 3.01; for the last 15 epochs, the WER is 3.10. As shown in the table above, freezing the OT head for the last 10 epochs results in WER of 3.00. Based on these results, it appears that the model\\u2019s performance doesn't change significantly given the model is trained for a few more epochs after freezing the alignment part of the OTTC model.\"}",
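The oracle experiment described in [W4] (deriving beta from a CTC forced alignment) can be sketched as a run-length encoding of the frame-level path, with each collapsed label weighted by its duration; the path representation below is an assumption for illustration:

```python
from itertools import groupby

def oracle_beta(path):
    """Collapse a frame-level forced-alignment path (one symbol per
    audio frame; repeats mean the symbol spans several frames) into
    the label sequence and duration-proportional label weights beta."""
    total_frames = len(path)
    labels, beta = [], []
    for symbol, run in groupby(path):
        duration = sum(1 for _ in run)  # frames covered by this label
        labels.append(symbol)
        beta.append(duration / total_frames)
    return labels, beta
```

For a 7-frame path ("ϕ", "Y", "ϕ", "ϕ", "E", "E", "S"), this yields the labels ("ϕ", "Y", "ϕ", "E", "S") with beta = [1/7, 1/7, 2/7, 2/7, 1/7], so beta sums to one by construction.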
"{\"comment\": [\"**[W4]**: These are interesting experiments, but they showed it's difficult to find optimal beta for this method now. This would be an important direction to improve this method.\"]}",
"{\"title\": \"Author Response to Reviewer 6GZM\", \"comment\": \"While we acknowledge the performance gap with CTC, as you noted, we believe our work marks a significant first step in introducing a differentiable alignment framework with the flexibility to define distances/metrics over sequences (e.g., SOTD in our paper). Our experiments show that the framework has potential, and with further work from the community, it may eventually surpass state-of-the-art results.\\n\\nWe have now uploaded a revised version of our manuscript, incorporating experiments and insights based on the discussion with you and other reviewers (please refer to our global comment for details). We kindly invite you to review the changes, and we hope you will reconsider your scores in light of these updates.\"}",
"{\"title\": \"Author Response to Reviewer 6GZM (3/3)\", \"comment\": \"- **[Q2]** *In section 6, it\\u2019s said that in the 960h-LibriSpeech training setup, it got 4.77% WER at epoch 30 and no meaningful improvement in WER at 40 epochs without freezing the OT weights. Does it mean the final WER is also around 4.77%? It\\u2019s also said the alignments remain relatively stable as training progresses. If so, freezing alignment vs. no freezing alignment shouldn\\u2019t have big difference, but based on table 1, freezing OT weights in the last 10 epoch could get 4.24% WER. Could the author explain more about this?*\\n\\nWhen alignments are not frozen for the last 10 epochs, the WER is around 4.77%. However, when alignments are frozen for the last 10 epochs, the WER we obtain (as reported in Table 1) is 4.24%.\\n\\nWe believe that our use of the terms \\u201calignments remain relatively stable\\u201d might not have been very precise and caused confusion. By stating that alignments remain relatively stable we meant that we experimentally observed changes only at the extremities of the consecutive group of frames assigned to a token. An example of evolution of alignment in the OTTC model during training for 40 epochs, without freezing OT weights prediction head (alpha predictor), is shown in Figure 7 (Appendix A.3.1) of the updated paper. Please note that during the initial phase of training, there is significant left/right movement of boundary frames for all groups. As training progresses, the movement typically stabilizes to around 1-2 frames.\\n\\nWhile this can be considered \\u201crelatively stable\\u201d in terms of alignment, the classification loss (i.e., cross-entropy) in the OTTC framework is still considerably affected by these changes. This change of the loss is what impacts the final performance and the difference between freezing or not-freezing the alignments. 
Based on the reviewer\\u2019s feedback, we have added this explanation in the revised version of the paper.\\n\\nWe hope we have addressed the reviewer's concerns and would be happy to provide any additional details if needed. We sincerely hope the reviewer will consider revising the scores in light of the clarifications provided.\"}",
"{\"title\": \"Author Response to Reviewer 6GZM\", \"comment\": \"Dear Reviewer,\\n\\nWe hope that our latest revisions have addressed your concerns. If there are any additional details or clarifications we can provide, please let us know, as the discussion period ends tomorrow (26th) AoE. Thank you for your time and valuable comments.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"Author Response to Reviewer bmks (1/2)\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback and insightful comments, which have helped us improve the clarity and quality of our work. Below, we address each of the review points in detail.\\n\\n**[W1]** Thank you for the suggestion. Based on the recommendation, we conducted an experiment where we first obtained the best path (forced-alignment using the Viterbi algorithm) from a trained CTC-based model on the same dataset and then trained a model to learn this single best path using Cross-Entropy. On the 360-hour LibriSpeech setup with Wav2Vec2-large [1] as the pre-trained model, this approach achieved a WER of 7.04 on the test-clean set and 13.03 on the test-other set. In comparison, the OTTC model under the same setup achieved significantly better results, with a WER of 3.00 on test-clean and 7.44 on test-other. These findings suggest that future work on single-path alignment strategies could benefit from building upon our proposed framework.\\n\\n[1] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, \\u201cwav2vec 2.0: A framework for self-supervised learning of speech representations,\\u201d in *NeurIPS,* 2020.\\n\\n- **[Q1]** *Adding to my earlier comment, if indeed a limitation of the work is that only a single path is learned, and though this of practical/computational interest, if further investigation were to reveal that this actually hurts generalization, could one formulate SOTD so as to learn multiple transports for each utterance?*\\n\\nIn our earlier investigations, we explored learning multiple heads for both logits and OT weights ($\\\\boldsymbol{\\\\alpha}$). The outputs from the OT prediction heads were averaged using either the geometric or arithmetic mean for alpha, and the arithmetic mean for logits. \\nAlthough this approach led to faster convergence, it ultimately resulted in worse performance, prompting us to abandon this direction. 
\\n\\nWe believe the best way to match or surpass the state of the art is by making $\\\\boldsymbol{\\\\beta}$ learnable with appropriate constraints or by refining the selection of static $\\\\boldsymbol{\\\\beta}$. Towards this end, we conducted an oracle experiment. In this experiment, we first force-align audio frames and text tokens using a CTC-based model trained on the same data. We then use this alignment to calculate $\\\\boldsymbol{\\\\beta}$ values. For example, given the target sentence \\\"YES\\\" and the best valid path from the Viterbi algorithm was \\\"($\\\\phi Y \\\\phi \\\\phi E E S$)\\\" we re-labeled it to \\\"($\\\\phi Y \\\\phi E S$)\\\" and set $\\\\boldsymbol{\\\\beta}$ = [1/7, 1/7, 2/7, 2/7, 1/7].\\nThis approach enabled OTTC to learn a uniform distribution for $\\\\boldsymbol{\\\\alpha}$, mimicking CTC's highest probability path. As a result, in both the 100h-LibriSpeech and 360h-LibriSpeech setups, the OTTC model converged much faster and matched CTC performance. This experiment underscores the critical role of $\\\\boldsymbol{\\\\beta}$, suggesting that a better strategy for its selection or training could lead to further improvements.\\n\\n- **[Q2]** *L128: re: the \\\"d-dimensional vector sequences\\\": the writing suggests that x_i and y_i are both d-dimensional, is that intended...?*\\n\\nYes, this is intentional. In the ASR context, $x_i$ represents a frame, while $y_j$ is the label, typically encoded as a one-hot vector. More broadly, the SOTD framework can be applied to any type of data (e.g., vectors, matrices, or tensors), provided a differentiable distance can be computed in the respective space.\\n\\n- **[Q3]** *L207, \\\" coupling matrix \\u03b3\\u2217 \\\": is there a more informative term? It suggests this is an alignment matrix, but then AIU each entry is actually an amount of mass moved from i to j.*\\n\\nThe term \\\"coupling matrix\\\" for $\\\\boldsymbol{\\\\gamma}$ is commonly used in optimal transport (OT). 
While it could potentially be confused with an alignment, which represents a discrete binary relation between bins (either connected or not), we opted not to refer to $\\\\boldsymbol{\\\\gamma}$ as a \\\"soft coupling\\\" to avoid any confusion for readers familiar with OT theory. Although $\\\\boldsymbol{\\\\gamma}$ describes the continuous relationship between bins in terms of the mass moved, we chose to retain the term \\\"coupling\\\" and referred to the usual alignment as a \\\"discrete alignment\\\" for clarity.\\n\\n- **[Q4]** *L223, \\\"\\u03b1\\u2192\\u03b3\\u2217 =argmin_{\\u03b3\\u2208\\u0393} W(\\u03bc[\\u03b1,n],\\u03bd[\\u03b2,m]),\\\" .....* and *L241, \\\"The computational cost of these alignment functions is low,\\\" explain why? (relating to previous comment).....*\\n\\nThe monotonicity described above is the key property for computing these quantities. If the bins are not sorted, the complexity is O(max(n, m) log(max(n, m))); otherwise, it is O(max(n,m)). An intuitive algorithm for this computation is provided in Appendix 1.1.1. In response to the reviewer\\u2019s question, we have expanded this appendix to further clarify both the computation and complexity. Additionally, we have included a figure in this appendix to visually illustrate the process.\"}",
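The linear-time claim in the [Q4] answer comes from the monotonicity of one-dimensional optimal transport: with sorted bins, the optimal coupling can be built in a single greedy sweep. Below is a minimal sketch of that standard north-west-corner computation (an illustrative implementation, not the paper's code, assuming both weight vectors sum to the same total):

```python
def monotone_coupling(alpha, beta, eps=1e-12):
    """Greedy sweep building the optimal 1-D transport plan between two
    sorted histograms alpha (n bins) and beta (m bins). Runs in O(n + m)
    and returns a sparse list of (i, j, mass) entries, at most n + m - 1
    of them. Assumes sum(alpha) == sum(beta)."""
    i = j = 0
    a, b = alpha[0], beta[0]  # mass remaining in the current bins
    gamma = []
    while True:
        mass = min(a, b)      # move as much as both bins allow
        gamma.append((i, j, mass))
        a -= mass
        b -= mass
        if a < eps:           # source bin exhausted: advance i
            i += 1
            if i == len(alpha):
                break
            a = alpha[i]
        if b < eps:           # target bin exhausted: advance j
            j += 1
            if j == len(beta):
                break
            b = beta[j]
    return gamma
```

For alpha = [0.5, 0.5] and beta = [0.25, 0.75], the sweep yields [(0, 0, 0.25), (0, 1, 0.25), (1, 1, 0.5)], whose row and column sums reproduce alpha and beta; the monotonic structure is what makes the coupling both sparse and cheap to compute.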
"{\"title\": \"Revised version of the paper\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your valuable feedback. In response, we have added a new section (A.3) in the appendix, which includes the following updates:\\n\\n- **Expanded Results:** We evaluate the OTTC framework using Wav2Vec2-large as the pre-trained model, further narrowing the performance gap between OTTC and CTC.\\n- **Ablation Studies:** We conduct experiments on key design choices, such as single-path alignment, fixed OT weights, and the impact of freezing the *OT weights prediction head* across epochs. We also highlight the importance of learnable $\\\\boldsymbol{\\\\beta}$ and include an oracle experiment showcasing OTTC\\u2019s potential to match or potentially exceed CTC. Additionally, we discuss the limitations of soft-DTW, such as its susceptibility to alignment collapse, and provide a comparative analysis with OTTC.\\n- **Alignment Analysis:** We provide quantitative metrics (precision, recall, F1 score, intersection duration ratio) and visualizations to evaluate alignment performance. Our results show that OTTC achieves better alignment performance compared to CTC. Additionally, we discuss the temporal evolution of alignments during training.\\n\\nWe hope these additions comprehensively address your suggestions and improve the clarity of our work.\\n\\nBest,\\n\\nThe Authors\"}",
"{\"title\": \"Author Response to Reviewer 6GZM\", \"comment\": \"We thank the reviewer for their reply.\\n- **[W1]** In presenting the new results using Wav2Vec2-large as the seed, our intent was to demonstrate the potential of our approach further, particularly as the gap with CTC is significantly reduced compared to the results obtained with XLSR in the original paper.\\nWe fully acknowledge the reviewer\\u2019s point that a relative WER increase of >10% shows that our approach is not yet on par with CTC. However, we would like to emphasize that our method introduces a completely novel framework, which is non-incremental in nature. We hope the community will recognize its potential and build upon it to advance its performance, much like how substantial research efforts were required to make CTC competitive with traditional hybrid systems (please see 2nd paragraph of Section 1 in [3]). We view this work as an important first step, and we believe it lays a foundation for further advancements.\\n\\n[3] D. Povey, V. Peddinti, D. Galvez, P. Ghahrmani, V. Manohar, X. Na, Y. Wang, and S. Khudanpur, \\u201cPurely sequence-trained neural networks for ASR based on lattice-free MMI,\\u201d in *INTERSPEECH,* 2016\\n\\n- **[W2]** We thank the reviewer for suggesting tools such as forced-alignment frameworks to obtain ground truth alignments. However, we initially refrained from using these tools due to the limitations of forced alignment as a substitute for true ground truth. During forced alignment generation, models predict probabilities over vocabularies (for CTC-based models) or PDF-IDs (for hybrid systems), and the Viterbi algorithm is subsequently used to infer frame-level alignments. This process is inherently influenced by the biases of the underlying models.\\n\\nNevertheless, We identified that forced alignment from the AMI dataset has been used in past works [4, 5] to measure alignment performance. 
Following the methodology in [4], we calculated precision (P), recall (R), and F1 score. It is important to note that we only considered word-level timestamps, as they are typically less erroneous than individual phoneme or sub-word level timestamps. The results are as follows: CTC model: Intersection Duration Ratio = 17.19%; OTTC model: Intersection Duration Ratio = 42.12%.\\n\\nThis highlights that, on average, the CTC model either predicts the start of words with significant delay or assigns very few audio frames to non-blank symbols (resulting in a peaky behavior).\\n\\n[4] E. Rastorgueva, V. Lavrukhin, and B. Ginsburg, \\u201cNeMo Forced Aligner and its Application to Word Alignment for Subtitle Generation,\\u201d in *Proc. INTERSPEECH*, 2023.\\n\\n[5] Max Bain, Jaesung Huh, Tengda Han, and Andrew Zisserman. \\\"WhisperX: Time-accurate speech transcription of longform audio,\\\" in *Interspeech*, 2023.\\n\\n- **[W4]** We thank the reviewer for their thoughtful feedback and for recognizing the value of our experiments. We agree that finding appropriate values for beta is an important direction for future research, which we believe will unlock the full potential of the proposed framework. This has been emphasized in the paper.\\nHowever, we respectfully disagree with the statement that our experiments demonstrate finding an optimal beta is inherently difficult. The optimal beta may not necessarily be the one derived from forced-alignment using a CTC model, as such alignments can still suffer from issues like label delay. 
Instead, the oracle experiment serves to demonstrate that beta is a critical parameter, and further research is necessary to identify the truly optimal values.\\n\\nEven with the simple uniform beta used in our experiments, the proposed method achieves reasonable performance, showcasing the robustness and promise of the framework even in the absence of optimized beta values.\"}",
"{\"comment\": \"I agree that the proposed method introduces a completely novel framework to find the alignment between two sequences other than CTC. But my concern is still the lower accuracy. Besides, how to tune parameter \\\"beta\\\" need more investigations.\"}",
"{\"title\": \"Author Response to Reviewer wARR\", \"comment\": \"We thank the reviewer for their reply.\\n\\nThanks for the suggestion. We will include these details and experimental results in the revised manuscript.\\n\\nWe thank the reviewer for suggesting tools such as forced-alignment frameworks to obtain ground truth alignments. However, we initially refrained from using these tools due to the limitations of forced alignment as a substitute for true ground truth. During forced alignment generation, models predict probabilities over vocabularies (for CTC-based models) or PDF-IDs (for hybrid systems), and the Viterbi algorithm is subsequently used to infer frame-level alignments. This process is inherently influenced by the biases of the underlying models.\\n\\nNevertheless, We identified that forced alignment from the AMI dataset has been used in past works [3, 4] to measure alignment performance. Following the methodology in [3], we calculated precision (P), recall (R), and F1 score. It is important to note that we only considered word-level timestamps, as they are typically less erroneous than individual phoneme or sub-word level timestamps. The results are as follows:\", \"ctc_model\": \"Intersection Duration Ratio = 17.19%\", \"ottc_model\": \"Intersection Duration Ratio = 42.12%\\n\\nThis highlights that, on average, the CTC model either predicts the start of words with significant delay or assigns very few audio frames to non-blank symbols (resulting in a peaky behavior).\\n\\n[3] E. Rastorgueva, V. Lavrukhin, and B. Ginsburg, \\u201cNeMo Forced Aligner and its Application to Word Alignment for Subtitle Generation,\\u201d in *Proc. INTERSPEECH,* 2023.\\n\\n[4] Max Bain, Jaesung Huh, Tengda Han, and Andrew Zisserman. WhisperX: Time-accurate speech transcription of longform audio. 
In *Interspeech,* 2023.\\n\\nWe will include the performance of these alignment metrics in the revised manuscript as well.\", \"the_results_for_this_metric_are_as_follows\": \"\"}",
"{\"title\": \"Author Response to Reviewer wARR\", \"comment\": \"We thank the reviewer for their reply.\\n\\n**1.** Yes, the model can still be trained successfully without the OT weights prediction. As demonstrated in our previous experiment (detailed in the response above), we trained the model on the 360-hour LibriSpeech setup with Wav2Vec2-large as the pre-trained model, using fixed and uniform OT weights. The results showed a WER of 3.51 on test-clean (compared to 2.77 for CTC and 3.0 for OTTC with learnable OT weights) and 8.24 on test-other (compared to 6.58 for CTC and 7.44 for OTTC with learnable OT weights).\\n\\nThese results indicate that while the model can indeed be trained, the use of fixed OT weights leads to degraded WERs and a complete loss of localization. We hypothesize that fixed OT weights ($\\\\boldsymbol{\\\\alpha}$) struggle to account for variations in speaking rates across the training set, which may explain the observed challenges in performance.\\n\\n**2.** We thank the reviewer for bringing up this important point. To further investigate the potential of the OTTC framework, we conducted an oracle experiment. In this experiment, we used a CTC-based model trained on the same dataset to force-align audio frames and text tokens, thereby generating oracle $\\\\boldsymbol{\\\\beta}$ values. These $\\\\boldsymbol{\\\\beta}$ values were then used to train the OTTC model. Remarkably, under this setup, the OTTC model matched the performance of the CTC model on both 100h-LibriSpeech and 360h-LibriSpeech setups.\\nThese results underscore the promise of OTTC as a framework for ASR. With an appropriately chosen fixed or learnable $\\\\boldsymbol{\\\\beta}$, OTTC can achieve performance on par with, or potentially surpass, that of CTC, while avoiding the peaky behavior characteristic of CTC.\\n\\nWe hope we have addressed the reviewer's concerns and would be happy to provide any additional details if needed. 
We sincerely hope the reviewer will consider revising the scores in light of the clarifications provided.\"}",
"{\"summary\": \"This is a well-written submission describing a novel application of Optimal Transport to the fundamental alignment problem in ASR. Re-casting alignments as a matrix of coupling weights representing the \\\"transport\\\" of units in a source alignment to units in a target alignment the authors to use concepts from Optimal Transport to minimize the overall transport cost efficiently and flexibly, in a way that mitigates the peaky behavior typically observed in ASR alignments based on the CTC model with dynamic programming. The work presents ASR results (WERs) on well-known public domain tasks (LibriSpeech and AMI); the proposed method trails the standard CTC model, but the work offers a fresh perspective on a long-standing challenge in core ASR modeling technologies. As such I think the work is a high interest to the community.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Clarity of presentation, quality of the writing, and originality; very good literature survey & references.\", \"weaknesses\": \"A few concepts could be explained more clearly. One thought regarding the evaluation: since the proposed method only learns a single alignment path, it might make sense to include a comparison with CTC models trained using the single best alignment paths for any given training utterance (aka the \\\"Viterbi algorithm\\\") rather than the standard sum over all possible alignments. I am wondering if the gap in WER between proposed method and standard CTC comes from the use of a single path, versus multiple paths. 
This could be part of the evaluation.\", \"questions\": \"Adding to my earlier comment, if indeed a limitation of the work is that only a single path is learned, and though this is of practical/computational interest, if further investigation were to reveal that this actually hurts generalization, could one formulate SOTD so as to learn multiple transports for each utterance?\\n\\nMore comments, mostly nits re: writing:\\n\\nL048, \\\"requires comparatively large amount of data\\\" --> \\\"requires a comparatively large amount of data\\\"\\n\\nL128, re: the \\\"d-dimensional vector sequences\\\": the writing suggests that x_i and y_i are both d-dimensional, is that intended...?\\n\\nL190, specify what is a \\\"\\u03b4 measure\\\"?\\n\\nL195, \\\"\\u03bd[\\u03b2, n]\\\", I think this should be \\\"\\u03bd[\\u03b2, m]\\\"?\\n\\nL207, \\\" coupling matrix \\u03b3\\u2217 \\\": is there a more informative term? It suggests this is an alignment matrix, but then AIU each entry is actually an amount of mass moved from i to j.\\n\\nL223, \\\"\\u03b1\\u2192\\u03b3\\u2217 =argmin_{\\u03b3\\u2208\\u0393} W(\\u03bc[\\u03b1,n],\\u03bd[\\u03b2,m]),\\\" give the reader a heads up: how will \\u03b3 typically be found... gradient descent, or some other method? Give an intuition about the ease/difficulty therein?\\n\\nL241, \\\"The computational cost of these alignment functions is low,\\\" explain why? (relating to previous comment). (The cost seems to go beyond just the bins being sorted or not, but perhaps that is in fact the key aspect; it is not completely clear to me).\\n\\nL252, \\\"Sequences Optimal Transport Distance (SOTD)\\\": motivate this extension more? I.e., make it clear what is missing from the alignment model presented so far.\\n\\nL271, \\\"there is sequences\\\" : fix typo\\n\\nL302, \\\"When the function F is powerful, the model can collapse \\\": be more precise than \\\"powerful\\\"? 
What types of specific functions would lead to collapse?\\n\\nL325, \\\"Ce\\\", define Cross-Entropy somewhere? This would make e.g. Eq. (14) clearer .\\n\\nL376, \\\"relaxation of the last term \\\": what does \\\"relaxation\\\" here and in the following mean...?\\n\\nL455, \\\"peror-mance\\\", fix typo\\n\\nL520, \\\"envision that learning label weights with suitable constraints can bridge the performance gap\\\", be more specific?\\n\\nL845, \\\"SOTD ARE PSEUDO METRIC\\\" --> \\\"SOTD IS A PSEUDO METRIC\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}"
]
} |
EMMnAd3apQ | ToVE: Efficient Vision-Language Learning via Knowledge Transfer from Vision Experts | [
"Yuanchen Wu",
"Junlong Du",
"Ke Yan",
"Shouhong Ding",
"Xiaoqiang Li"
] | Vision-language (VL) learning requires extensive visual perception capabilities, such as fine-grained object recognition and spatial perception. Recent works typically rely on training huge models on massive datasets to develop these capabilities. As a more efficient alternative, this paper proposes a new framework that Transfers the knowledge from a hub of Vision Experts (ToVE) for efficient VL learning, leveraging pre-trained vision expert models to promote visual perception capability. Specifically, building on a frozen CLIP image encoder that provides vision tokens for image-conditioned language generation, ToVE introduces a hub of multiple vision experts and a token-aware gating network that dynamically routes expert knowledge to vision tokens. In the transfer phase, we propose a "residual knowledge transfer" strategy, which not only preserves the generalizability of the vision tokens but also allows selective detachment of low-contributing experts to improve inference efficiency. Further, we explore to merge these expert knowledge to a single CLIP encoder, creating a knowledge-merged CLIP that produces more informative vision tokens without expert inference during deployment. Experiment results across various VL tasks demonstrate that the proposed ToVE achieves competitive performance with two orders of magnitude fewer training data. | [
"Vision-language Modeling",
"Knowledge Transfer",
"Vision Experts"
] | Accept (Poster) | https://openreview.net/pdf?id=EMMnAd3apQ | https://openreview.net/forum?id=EMMnAd3apQ | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xi8zumP0g5",
"q7e5eYEOl1",
"ptjX5c4N0v",
"pDBisBMXvq",
"ozOQ0R2Jaz",
"jytQd0UBkj",
"d3cCFOq4Y6",
"byFQwHcMV7",
"bfnquescOy",
"aNzkEu0jkD",
"YKzctA5QX0",
"Xq5mrBks5c",
"Eonwjy3Aww",
"BPq2rXmZdQ",
"9hh8yWEKCR",
"6peiwa9y4I",
"685z3kYmFf",
"5tqXtuXAo9",
"44l484kalo",
"35IXWqqeEM",
"2gWL9b2QYw"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1732346626646,
1737523954721,
1732188080624,
1732188073748,
1731915166004,
1730648889771,
1731913434098,
1732249566715,
1732264339931,
1732531531692,
1732188066099,
1729430720734,
1732202769735,
1731900265970,
1732248864289,
1734733315159,
1732542594451,
1730635244734,
1731903210696,
1732188086880,
1730776377065
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_q7GH"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_6BHS"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_6BHS"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_B56D"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_6BHS"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_q7GH"
],
[
"ICLR.cc/2025/Conference/Submission9014/Area_Chair_heGD"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_dYR6"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_B56D"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9014/Reviewer_dYR6"
]
],
"structured_content_str": [
"{\"comment\": \"UPDATES:\\nThanks to all reviewers for their thorough review and valuable comments. We have uploaded a new revised version of ToVE which incorporates the discussions of the concerns by the reviewers. We deeply appreciate your time and effort in helping us improve our work.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Dear reviewer, we want to kindly follow up on the submitted rebuttal. Given the importance of your feedback in refining and improving the work, we would greatly appreciate it if you could review the rebuttal at your earliest convenience.\"}",
"{\"comment\": \"Dear reviewer, we want to kindly follow up on the submitted rebuttal. Given the importance of your feedback in refining and improving the work, we would greatly appreciate it if you could review the rebuttal at your earliest convenience.\"}",
"{\"comment\": \"We appreciate the reviewer's feedbacks. Here, we resolve each of your concerns below.\\n\\n---\\n\\n> The significance of ToVE and performance comparision with other methods.\\n\\nThank you for pointing out this concern. We would like to clarify that the motivation of this paper is to **transfer visual knowledge from pre-trained vision experts**, which have already acquired diverse visual understandings, **to vision-language learning**, thereby achiving efficient learning. \\n\\nAlthough it is true that incorporating vision experts in ToVE increases the parameter count, we emphasize that our training cost\\u2014measured in terms of training samples and computational resources\\u2014is **substantially lower than that of other comparable methods**.\\n\\nAdditionally, **we proposed strategies such as ToVE_lite and detachment of experts to mitigate inference costs**. Notably, **ToVE_lite, which operates without any vision experts, achieves competitive performance** compared to other methods (a more detailed response regarding ToVE_lite is provided below). We hope this explanation addresses your concerns.\\n\\n---\\n\\n> The effectivess of ToVE_lite\\n\\nWe would like to clarify that the results referenced in the review (Table 5) **do not correspond to using a single visual encoder**. **Instead, they represent the performance achieved when EVA is utilized as a vision expert** (i.e., the base CLIP vision encoder is combined with the EVA expert). As shown in Table 1-2, ToVE-lite, which employs a single knowledge-transferred CLIP as the vision encoder, achieves competitive performance, effectively demonstrating its efficacy.\\n\\nTo further address your concern, we have included additional results in the table below, where **only EVA is used as the vision encoder**. These results indicate that EVA does not exhibit significant advantages compared to using CLIP as the vision encoder. 
Moreover, **EVA performs worse than ToVE-lite**, which reinforces the effectiveness of our proposed approach.\\n\\n| dataset | CLIP as Encoder | EVA as Encoder | CLIP + EVA expert | ToVE-lite |\\n| ------- | --------------- | -------------- | ----------------- | --------- |\\n| NoCaps | 92.1 | 95.7 | 109.1 | 104.1 |\\n| VQAv2 | 70.0 | 70.5 | 74.4 | 74.0 |\\n| VSR | 54.8 | 51.7 | 63.8 | 65.9 |\\n\\n---\\n\\n> The amount of pretraining data for both the vision and language models should also be specified.\\n\\nWe apologize for any confusion caused by the definition of the pretraining cost. In our context, **pretraining data refers specifically to the datasets used exclusively during the development of ToVE**. This definition intentionally excludes the original datasets utilized in training the expert models, **adhering to common practices within the VLM community**. For example, recent VLMs (e.g., Prismer, BLIP, and BLIP-2) typically leverage pre-trained base vision models (e.g., CLIP, SigLIP) and text models (e.g., BERT, Vicuna) **without considering these as part of the pretraining data**. \\n\\nImportantly, our objective is to **harness the knowledge encapsulated in expert models to minimize the data requirements for constructing VLMs**. We hope this clarification addresses the reviewer\\u2019s concern.\\n\\n---\\n\\n> ToVE with LLM setting and a more comprehensive evaluation.\\n\\nThank you for your valuable suggestion. In response, we have initiated experiments integrating ToVE into an LLM (the same setting as LLaVA). However, due to the substantial computational and time requirements associated with the training and fine-tuning stages, **these experiments are still ongoing**. We are committed to **providing updated results within the next few days**.\\n\\nWe acknowledge your recommendation to include a broader range of QA datasets for evaluation. **We will attach the results after the ToVE x LLM experiments are finished**. 
Furthermore, we will include the CIDEr score for COCO in Table 3 in the revised version as recommended. Thank you for your comments, which will help improve the completeness and relevance of our work.\"}",
"{\"summary\": \"This paper introduces ToVE, a selection method for vision encoders in vision-language models. ToVE utilizes multiple vision encoders, each pre-trained differently, and uses a gating network to select their output tokens, which are then added to the tokens from a CLIP encoder as visual features. Notably, these vision encoders can be distilled into a single encoder to reduce inference computation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes a novel method for enhancing vision-language models by leveraging multiple vision encoders.\\n2. Experiments demonstrate the superiority of using multiple vision encoders.\", \"weaknesses\": \"1. My primary concern is the significance of this approach. Using multiple vision encoders will multiply the parameter count and computational cost, making comparisons with other base-sized VLMs unfair. On the other hand, if distillation is used to merge them into a single encoder, the resulting composite encoder does not performs obviously better than simply using EVA as the encoder.\\n\\n| dataset | EVA | ToVE-lite |\\n| ----- | ----- | ----- |\\n| NoCaps | 109.1 | 104.1|\\n| VQAv2 | 74.4 | 74.0|\\n| VSR | 63.8 | 65.9|\\n| POPE-R | 85.7 | 86.6|\\n| POPE-C | 80.8 | 81.9|\\n| average |82.76|82.50|\\n\\n\\nThis suggests that using ToVE-lite might be less effective than carefully selecting a single, well-performing encoder.\\n\\n2. As mentioned above, due to the difference in parameter counts, comparisons with other VLMs may be unfair. Additionally, although the amount of VL data used for pretraining is indicated, the vision encoders and language models themselves are trained on large datasets, enhancing their individual visual and linguistic capabilities, which in turn boosts multimodal performance. Therefore, the amount of pretraining data for both the vision and language models should also be specified.\\n\\n3. 
The setting used in this paper seems somewhat outdated, as the current trend in VLMs is toward general-purpose multimodal LLMs, such as LLaVA. I recommend that the authors implement ToVE within the LLaVA setting to demonstrate its effectiveness in broader, more contemporary scenarios.\\n\\n4. The datasets used for comparison are somewhat limited. I suggest adding more datasets, such as GQA, TextVQA, and VizWiz, to provide a more comprehensive evaluation. Additionally, Table 3 should also report the CIDEr score for COCO.\", \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We appreciate the reviewer's comments. Here, we resolve each of your concerns below.\\n\\n---\\n\\n> **The efficiency of ToVE for initial setup and training.**\\n\\nWe appreciate the reviewer\\u2019s comments regarding the computational complexity associated ToVE\\u2019s training process. We would like to clarify that our proposed method, ToVE, demonstrates significantly improved efficiency in training requirements compared to many widely used vision-language models, such as SimVLM, GIT, and BLIP. The table below provides **a comparative analysis of training data sizes and computational costs (PFLOPs Days)**:\\n\\n| Model | Training Data | Training Cost (# PFLOPs Days) |\\n| ------ | ------------- | ----------------------------- |\\n| SimVLM | 1.8B | 66.9 |\\n| GIT | 0.8B | 45.8 |\\n| BLIP | 129M | 22.2 |\\n| ToVE | 3M | 0.37 |\\n\\nAs shown, ToVE achieves substantial reductions in both training data size and computational cost. While we acknowledge that the process of routing expert knowledge to vision tokens (including experts, MLPs, gating networks) introduces some additional complexity, **it does not significantly diminish the computational efficiency brought by our vision knowledge transfer process**. We hope this clarification addresses the reviewer\\u2019s concerns regarding the practicality of our approach.\\n\\n---\\n\\n> **The discussion about the contribution of the load balancing loss in the training stage.**\\n\\nThank you for your valuable feedback. We apologize for the insufficient discussion of the load balancing loss. For ToVE, the implement of load balancing loss is essential as there is a high risk that **easily transferable experts dominate in the early stages of training in our early experiments**. 
This phenomenon is particularly obvious between low-level experts and embedding experts.\\n\\nFor low-level experts, ToVE learns from scratch how to perform patch embedding on their outputs to convert low-information data into tokens for the knowledge transfer. Conversely, learning the MLP mapping for embedding experts is relatively easier. Due to **the varying difficulty of knowledge transfer across experts**, the gating network without load balancing loss **tends to converge prematurely on embedding experts during early training**. This leads to diminished gradient flow for low-level experts (due to gating operations), ultimately causing the model to underutilize their valuable low-level information. \\n\\nWe list one group of early experiment results in the following table to clarify this point. For the case of adopting DINO + Depth experts, the gating score of the Depth expert is close to 0 without the load-balancing loss, and **the final model performance degrades to that of using only the DINO expert**. Similar phenomena can be observed across other expert configurations.\\n\\n| Experts | DINO | DINO + Depth | DINO + Depth + load balancing |\\n| --------------------- | ----- | ------------- | ----------------------------- |\\n| Average Routing score | - | **0.99 vs. 0.01** | 0.74 vs. 0.26 |\\n| CIDEr on COCO | 128.9 | **128.6** | 130.1 |\\n\\nIn the revised version of the paper, we will include these experimental results and a detailed discussion to highlight the critical role of load-balancing loss in improving model robustness and performance.\\n\\n---\\n\\n> **The applicability in highly specialized domains.**\\n\\nThank you for your insightful comment. While our current experiments may not comprehensively address the applicability of ToVE in highly specialized domains, **we have demonstrated that different pre-trained experts can effectively transfer valuable knowledge to specific vision-language tasks requiring diverse vision capabilities**. 
For instance, the DINO model is leveraged for spatial reasoning, while low-level experts contribute to object perception. These results are detailed in Table 5 of the paper.\\n\\nFor future work, we plan to extend ToVE to more specialized domains, such as medical report generation. A potential application would involve the MIMIC-CXR dataset [1], which comprises approximately 300k chest radiographs. In this context, we intend to utilize segmentation and classification models, which are widely available, as domain-specific experts to evaluate the adaptability and effectiveness of ToVE in specialized scenarios.\\n\\n[1] MIMIC-CXR: A de-identified, publicly available database of chest radiographs with free-text reports.\"}",
"{\"comment\": \"Thanks for your reply, you solved my doubt and I am willing to improve my score.\"}",
"{\"comment\": \"I paid attention to the experiment results on LLaVA commented by Reviewer q7GH, and I think this result really proves the effectiveness of the method, so I am willing to further improve my score.\"}",
"{\"comment\": \"Thank you for addressing my concerns. I am not fully convinced with the comparisons, but I also understand that it is quite difficult to find exact baselines to compare with in this field at the moment. I like this paper and would like to see it get accepted.\"}",
"{\"comment\": \"Dear reviewer, we want to kindly follow up on the submitted rebuttal. Given the importance of your feedback in refining and improving the work, we would greatly appreciate it if you could review the rebuttal at your earliest convenience.\"}",
"{\"summary\": \"This paper proposes a method called ToVE, which fuses visual information from different visual encoders as the input of language model in a vision-language model. Specifically, the authors use CLIP-ViT as the main input and use it to weight the tokens from other backbones. Furthermore, the authors proposed a distillation algorithm to teach knowledge from different visual backbones to CLIP-ViT. Experiments show that the proposed method can bring certain improvements to the performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The proposed method is easy to understand.\", \"weaknesses\": \"1) The organization of this paper is somewhat confusing, especially the methods section.\\n2) The computation overhead can be siginifcantly higher than other methods.\\n3) The improvement of model capabilities may come from the introduction of a stronger visual backbone, rather than each model playing its own role in its professional field.\", \"questions\": \"1) This article is confusing in several ways:\\n\\n a) The author needs to further explain why setting the weights of tokens from unnecessary backbones to -inf in Formula 3 can improve computational efficiency. And authors should clarify if/how they are able to avoid activating all experts.\\n\\n b) In the section ``Enhancing Exploration of Vision Experts,'' the authors use the concept of L_aux but only cite it, lacking an explanation of how to apply it in this method. The authors should provide some explanation in the main text.\\n\\n2. The motivation behind the manuscript and the source of the proposed method's performance need further clarification. From Fig. 8 and Fig. 9, it appears that the strong visual backbone, EVA, plays the primary role in most scenarios. Have the authors considered re-running the baseline experiment using EVA as the sole visual feature extractor? 
This will help clarify the contribution of the other experts beyond EVA and provide deeper insights into the model's overall performance.\\n\\n3. There are some methods that ToVE should compare with:\\n\\n [1] Zi-Yi Dou, Yichong Xu, Zhe Gan, et al. An Empirical Study of Training End-to-End Vision-and-Language Transformers. CVPR 2022.\\n\\n [2] Wonjae Kim, Bokyung Son, Ildoo Kim. ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. ICML 2021\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"UPDATE:\\n\\nDear reviewer, following your recommendation, we have conducted additional experiments incorporating ToVE with an LLM. Specifically, we implemented ToVE within the LLaVA-1.5 framework (ToVE x Vicuna) using LLaVA\\u2019s training data, where we randomly sampled 3/4 of the pretraining data and utilized the full instruction-tuning dataset. In the ToVE design, we introduced two experts (DINO and Depth) and employed QLoRA to reduce the computational burden.\\n\\nDue to the time constraints during the rebuttal phase, we were unable to perform extensive hyperparameter tuning and full experiments. As such, there remains room for further optimization, and we kindly ask for your understanding. Below are the preliminary experimental results. These demonstrate that **knowledge transfer significantly enhances the model\\u2019s perception capabilities compared to the original baseline**.\\n\\n| Models | MME_p | TextVQA | MMStar (Overall) | MMStar (Coarse Perception) | MMStar (Fine-grained Perception) |\\n| ------------------------ | ------ | ------- | ---------------- | -------------------------- | -------------------------------- |\\n| LLaVA-1.5-7B (QLoRA) | 1434.5 | 49.7 | 34.6 | 61.6 | 27.6 |\\n| ToVE x LLaVA-1.5 (QLoRA) | **1523.1** | **50.4** | **35.8** | **64.0** | **31.2** |\\n\\nWe acknowledge your observation regarding the current trend toward general-purpose multimodal LLMs and agree that such models hold significant value. However, we believe there remains practical merit in leveraging smaller language models. These models allow for efficient training tailored to specific tasks, such as image captioning and VQA, which are highly relevant for practical applications.\\n\\nLastly, we sincerely thank you for your constructive comments. We recognize the importance of general-purpose multimodal LLMs and plan to explore them further in future work.\"}",
"{\"comment\": \"We appreciate the reviewer's comments. Here, we resolve each of your concerns below.\\n\\n---\\n\\n> The author needs to further explain why setting the weights of tokens from unnecessary backbones to -inf in Formula 3 can improve computational efficiency. And authors should clarify if/how they are able to avoid activating all experts.\\n\\nThank you for your feedback. We would like to clarify that **it is NOT the case that setting the gating value of a specific expert to -inf directly improves computational efficiency**. INSTEAD, the computational efficiency is **achieved by detaching the low-contributing expert entirely from the architecture during inference**, as these experts are not deeply coupled in ToVE. This is the reason for the improved computational efficiency, as detailed in **Lines 226\\u2013229**.\\n\\nRegarding the rationale for setting the gating value of detached experts to -inf, this strategy **ensures proper reconciliation of ensemble weights** during the SoftMax operation. When detaching Expert N, the gating network will still assign a gating score to Expert N. Assigning a value of -inf is to ensure that **its ensemble weight becomes 0** during the expert knowledge fusion process (**please see Lines 189\\u2013201 for details**).\\n\\n---\\n\\n> In the section ``Enhancing Exploration of Vision Experts,'' the authors use the concept of L_aux but only cite it. The authors should provide some explanation in the main text.\\n\\nThank you for your feedback. We would like to clarify that the details of L_aux within ToVE **have been elaborated in Appendix A.3**. The decision to exclude it from the main text stems from its role as an auxiliary learning loss in ToVE, which is **not the primary focus of our contributions**. We hope for your understanding on this matter since there is limited space in the main paper.\\n\\n---\\n\\n> Have the authors considered re-running the baseline experiment using EVA as the sole visual feature extractor? 
This will help clarify the contribution of the other experts beyond EVA and provide deeper insights into the model's overall performance.\\n\\nThank you for this insightful comment. In our early experiments, we have tested using EVA as the base vision encoder of ToVE. The results are presented below. As observed, **the performance of CLIP and EVA as standalone vision encoders is comparable**. Moreover, there are some works that adopt CLIP as the vision encoder, such as Prismer and BLIP. For the above reasons, we selected CLIP as the base vision encoder for ToVE.\\n\\n| dataset | CLIP as Encoder | EVA as Encoder | CLIP + EVA expert |\\n| ------- | --------------- | -------------- | ----------------- |\\n| NoCaps | 92.1 | 95.7 | 109.1 |\\n| VQAv2 | 70.0 | 70.5 | 74.4 |\\n| VSR | 54.8 | 51.7 | 63.8 |\\n\\nWe also explored the reasoning behind the substantial performance improvement observed when combining CLIP and EVA as you noted in our experiments. In Table 5, **the contribution of EVA expert (CLIP + EVA) is mainly in the tasks that require semantic understanding**, such as caption and VQA. We attribute this improvement to EVA\\u2019s ability to effectively process background representations.\\n\\nFigures 5 and 9 provide visualizations of the gating map for each vision expert, where **EVA\\u2019s gating activations are predominantly observed in background regions**, with minimal activation in subject areas. This finding suggests that while low-level experts and DINO focus on visual perceptual knowledge (this is also supported by the results in the visual perception tasks), their contributions to background context understanding are limited. In contrast, EVA **enhances semantic comprehension in these regions for the base vision encoder**, benefiting the overall performance.\\n\\n---\\n\\n> There are some methods that ToVE should compare.\\n\\nThank you for highlighting these relevant works. We acknowledge their significance and appreciate your suggestion. 
In the revised version of the paper, we will include a comparison with the mentioned methods ([1] and [2]) to further enhance the completeness and quality of our paper.\"}",
"{\"comment\": \"Thank you for your response. I will raise the score, but I hope the author revises the manuscript to include these new results.\"}",
"{\"metareview\": \"Summary: This paper proposed ToVE (Transfer of Vision Experts), which transfers multi-expert knowledge to a vision encoder via a token-aware gating network and residual mechanism, significantly reducing training data requirements.\", \"main_strengths\": \"(1) The approach of leveraging multiple vision encoders as experts and transferring the knowledge into a single vision encoder is novel. (2) Extensive experiments validated the effectiveness of ToVE. (3) The visualization of ToVE's routing maps is interesting and insightful for understanding the mechanism.\", \"major_weaknesses\": \"(1) Experiments incorporating ToVE with an LLM (e.g., LLaVA-style) were lacking in the original version, and were added during discussion. (2) The writing quality, including explanations, organization, and notations, has room for improvement. (3) The compared\\n\\nThis paper received four positive scores as final rating, i.e., 6, 6, 6, 6. The AC agreed with the reviewers and recommends accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"The main concern addressed during discussion is incorporating ToVE with an LLM in LLaVA style. After the results were added, the reviewers acknowledged the effectiveness of ToVE with LLM and improved their scores.\"}",
"{\"comment\": \"Thank you for your response. I will keep my original score.\"}",
"{\"summary\": \"This paper presents a novel way to assimilate knowledge from different visual experts trained for different visual tasks into a pre-trained ViT for solving visual language tasks. The idea of expert knowledge assimilation might not be new, but it is not trivial to align depth, surface normal or edge information along with a clip-based ViT model for vision language tasks. The authors achieve it by a carefully constructed architecture pipeline, where the experts and the ViT model are frozen, a token-based gating network selects which expert to distil knowledge from into that visual token, and then a language decoder utilizes these modified tokens for the final task. They further go on to merge all the expert knowledge into their vision encoder to relieve using multiple experts during inference.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The idea of fusing knowledge from multiple experts into the vision encoder is good.\\n\\n2. The architecture design is novel, relating knowledge fusion to mixture of experts idea.\\n\\n3. Although similar to Prismer in idea, ToVE does have better results\\n\\n4. Ablations about types of knowledge merging, expert detachment are quite interesting and insightful\", \"weaknesses\": \"1. There are quite a few grammatical and sentence-construction mistakes throughout the paper, for e.g. \\\"The projection function of can be delineated as\\\" this sentence doesn't make sense. Also, the mapping function from d_k to d_lang should be about the feature dimension, since this is achieved by projection function MLPs, but it is written as token lengths, which is not the same as token feature dimension. Is this a writing mistake or the authors are changing token lengths, i.e., number of tokens?\\n\\n2. \\\"Different from MOEs, which commonly activates the expert with the top-1 routing score,\\\" this is not true. 
The \\\"original\\\" MoE paper (Shazeer et al.) did not have top-1, mainly switch transformer has top-1, and sparse MoE has top-k. Also, the authors employ a load balancing loss exactly similar to the Shazeer et al. paper. So, stating that is a bit misleading. \\n\\n3. After eq 6, it says t_clip and t_fuse, which do not even appear in the equation.\\n\\n4. The method requires token-specific output from each expert (eq 1). So, how are tokens obtained for an image from low-level vision experts?\\n\\n5. In Tables 1 and 2, ToVE is not sota but its results are marked bold. (e.g. BLIPv2 CIDEr score in Table 1, InstructBLIP POPE-R score in Table 2). Since comparison is tough due to architecture modification, expert merging, it is essential to at least show the number of training parameters to understand the comparison validity\", \"questions\": \"1. The paper is not written that carefully. I would suggest the authors to properly rewrite the paper considering grammatical mistakes, notation inconsistencies, etc.\\n\\n2. How is a token-level knowledge extracted from low-level vision experts? This is not described properly either in main paper or appendix.\\n\\n3. What would be proper baselines to compare against, since BLIP and others do not use knowledge sharing from other experts? I think the tables need a bit of redesign.\\n\\nPlease address these comments as well as the weaknesses. I am inclined to accept this paper since I like the architecture but the paper needs quite a bit of rework.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We appreciate the reviewer's feedback. Here, we resolve each of your concerns below.\\n\\n---\\n\\n> There are quite a few grammatical and sentence-construction mistakes. The statement about top-1 routing is a bit misleading. \\n\\nWe sincerely apologize for the grammatical errors, notation inconsistencies, and sentence construction issues in our paper, as pointed out in your review.\\n\\nRegarding the mapping function from $d_k$ to $d_{lang}$, we intended to **refer to the \\\"token feature dimension\\\"**, as you correctly pointed out in the review. We have realized that the use of \\\"token lengths\\\" was a confusing misrepresentation. \\n\\nFor $t_{clip}$ and $t_{fuse}$, we are sorry for our mistakes. They should be $\\tilde{t}_{vis}$ and $t_{vis}$, which **represent the original vision tokens and the knowledge-transferred vision tokens**.\\n\\nWe acknowledge that the statement \\\"Different from MOEs, which commonly activates the expert with the top-1 routing score\\\" was an overclaim. As you rightly pointed out, the original MoE paper (Shazeer et al.) did not use top-1 routing and sparse MoEs commonly use top-k. We appreciate your feedback on this and will revise this statement.\\n\\nWe will carefully revise the manuscript to address these kinds of issues comprehensively. Thank you for your valuable feedback and for bringing these matters to our attention.\\n\\n---\\n\\n> How is a token-level knowledge extracted from low-level vision experts? This is not described properly either in main paper or appendix.\\n\\nThanks for pointing this out. We would like to clarify that we include the details of encoding the low-level information from these experts **in Line 320-323 (Main paper)** and **Line 730-734 (Appendix A2)**. Specifically, **the encoding process is close to the patch embedding operation** in the standard ViT architecture. 
These low-level labels are processed **using randomly initialized convolutional layers to encode their respective vision knowledge**. Each expert is equipped with five lightweight convolutional layers with a small [3 \\u00d7 3] kernel. In the revised version, we will incorporate more details into the main text to emphasize this point.\\n\\n---\\n\\n> What would be proper baselines to compare against, since BLIP and others do not use knowledge sharing from other experts? \\n\\nThank you for your thoughtful and constructive feedback. Our primary motivation in this paper is to **transfer visual knowledge of pre-trained vision experts to enable efficient vision-language learning**. The vision experts are a fundamental component of ToVE, where the objective is to utilize their expertise alongside small-scale datasets to achieve competitive performance. Therefore, we compared ToVE with models that do not incorporate knowledge sharing from external experts. We added \\\"training samples\\\" to emphasize the training (data) efficiency in Tables 1-4.\\n\\nRegarding the concern about trainable parameters mentioned in your review, we would like to clarify that **ToVE is relatively parameter-efficient** compared to many works. We have supplemented some analyses of \\\"trainable parameters\\\", as shown in the table below, which will be incorporated into the revised version of the paper.\\n\\n| Method | Trainable Parameters | Training samples | NoCaps (CIDEr) |\\n| --------- | -------------------- | ---------------- | -------------- |\\n| ToVE_lite | ~80M | 3M | 108.2 |\\n| ToVE | ~100M | 3M | 112.5 |\\n| GIT | ~100M | 10M | 96.6 |\\n| BLIP | ~450M | 14M | 105.1 |\\n| BLIP | ~450M | 129M | 110.0 |\\n\\nOnce again, thank you for your valuable feedback, which has greatly contributed to improving the clarity and comprehensiveness of our work.\"}",
"{\"comment\": \"Dear reviewer, we want to kindly follow up on the submitted rebuttal. Given the importance of your feedback in refining and improving the work, we would greatly appreciate it if you could review the rebuttal at your earliest convenience.\"}",
"{\"summary\": \"This paper presents the ToVE (Transfer of Vision Experts) framework, leveraging pre-trained vision models to enhance vision-language learning efficiency by transferring expert knowledge via a token-aware gating network, resulting in competitive performance on vision-language tasks with significantly reduced data requirements.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The ToVE framework introduces a novel approach for efficient vision-language learning by utilizing a hub of pre-trained vision experts. This method promotes the effective transfer of knowledge, addressing the challenge of limited data availability in specialized domains.\\n\\n2. The experimental results demonstrate that ToVE achieves competitive performance across various vision-language tasks using significantly less training data compared to existing models. \\n\\n3. The paper includes visualizations of the gating network's routing decisions, illustrating how expert knowledge is allocated across different image regions. This enhances interpretability, showing how ToVE leverages expert knowledge in a token-specific manner to improve performance on complex visual tasks.\", \"weaknesses\": \"1. The process of routing expert knowledge to vision tokens, particularly the token-aware gating network, adds complexity. Although the authors propose methods for detaching low-contributing experts to improve efficiency, the initial setup and training remain computationally intensive, which may hinder practical application.\\n\\n2. The paper introduces a load-balancing loss to ensure a balanced use of experts, but the effectiveness of this loss in preventing over-reliance on certain experts is not extensively validated, leaving questions about how it affects the model\\u2019s performance and robustness.\\n\\n3. 
The paper emphasizes the efficiency benefits of transferring pre-trained expert knowledge, yet it lacks a detailed discussion on how ToVE handles potential domain mismatches between the knowledge of vision experts and specific downstream vision-language tasks, such as highly specialized applications.\", \"questions\": \"As shown in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
EMKZyZSl70 | DualContrast: Unsupervised Disentangling of Content and Transformations with Implicit Parameterization | [
"Mostofa Rafid Uddin",
"Min Xu"
] | Unsupervised disentanglement of content and transformation is significantly important for analyzing shape focused scientific image datasets, given their efficacy in solving downstream image-based shape-analyses tasks. The existing relevant works address the problem by explicitly parameterizing the transformation latent codes in a generative model, significantly reducing their expressiveness. Moreover, they are not applicable in cases where transformations can not be readily parametrized. An alternative to such explicit approaches is contrastive methods with data augmentation, which implicitly disentangles transformations and content. However, the existing contrastive strategies are insufficient to this end. Therefore, we developed a novel contrastive method with generative modeling, DualContrast, specifically for unsupervised disentanglement of content and transformations in shape focused image datasets. DualContrast creates positive and negative pairs for content and transformation from data and latent spaces. Our extensive experiments showcase the efficacy of DualContrast over existing self-supervised and explicit parameterization approaches. With DualContrast, we disentangled protein composition and conformations in cellular 3D protein images, which was unattainable with existing disentanglement approaches. | [
"Unsupervised Learning",
"Shape Analysis",
"Identifiability in Representation Learning",
"Disentangled Representation Learning",
"ML in Biology"
] | Reject | https://openreview.net/pdf?id=EMKZyZSl70 | https://openreview.net/forum?id=EMKZyZSl70 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"y6Yv9fG4P9",
"wYTYNvioi1",
"u6nU7PpaXm",
"tf1pso1yf1",
"tQ3kc5xXAt",
"sq9Lv4EePt",
"lrLBoyG0RL",
"VBXkU1rGG1",
"S1oiHuTVgW",
"Pn3jA7Ez2s",
"HOpfWEGETr",
"HMRXNoSJNF",
"FPssN0TEaW",
"DZKvBgvS6A",
"BKNugkTG8I",
"9AD4Oppgbg",
"6FszWy676a",
"21IylqJUwZ",
"1DFaIrPBUH"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1732784773789,
1733287987006,
1732813169620,
1732795732861,
1733119174530,
1732813538695,
1732808892985,
1733061799446,
1730964353940,
1734854908116,
1730587458718,
1733118719338,
1733118650389,
1732813056441,
1737523833487,
1732808603361,
1732789586192,
1732808927817,
1730121989051
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Reviewer_UJ45"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Reviewer_aQPG"
],
[
"ICLR.cc/2025/Conference/Submission7350/Reviewer_B3mr"
],
[
"ICLR.cc/2025/Conference/Submission7350/Area_Chair_QVss"
],
[
"ICLR.cc/2025/Conference/Submission7350/Reviewer_UJ45"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7350/Reviewer_aQPG"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7350/Reviewer_aQPG"
]
],
"structured_content_str": [
"{\"title\": \"Global Response\", \"comment\": \"We thank the area chairs and reviewers for their efforts in reviewing our paper and providing suggestions. Their helpful comments have greatly enhanced our work.\\n\\nWe have revised the manuscript according to the reviewers' suggestions and marked them in blue in the new version of the manuscript. The main differences include:\\n\\n1. **We included an additional Fig. 6 demonstrating convincing application-specific evaluation results of DualContrast.** The figure clearly shows how DualContrast can identify subtle conformational changes from a protein mixture cryo-ET subtomogram dataset, highlighting its significance in real-world applications. \\n\\n2. **We included additional evaluation metrics in Table 1.** In the table, we also reported values for all the evaluation metrics for the protein subtomogram dataset.\\n\\n3. **We added a Discussions & Limitations section (Section 5)**, discussing when DualContrast is expected to work and when not. We observed that the method disentangles the transformations, causing small pixel-space changes, e.g., subtle conformational changes in proteins, viewpoint changes in LineMod, etc. Identifying subtle changes is vital in scientific image datasets, and our method applies to such cases. However, it is not expected to disentangle transformations causing significant pixel-space changes or transformations not present in the dataset, which is also not feasible in a completely unsupervised manner. We discussed these issues in the newly added section. \\n\\n4. As suggested by Reviewer UJ45, **we excluded the StarMen dataset** and all its results from the manuscript to maintain consistency of evaluation across all the datasets and the page limit. Moreover, all the reviewers regarded the dataset as insignificant, so we used the space to demonstrate highly significant subtomogram results (Fig. 6). \\n\\n5. **We changed Fig. 2 to make it more comprehensive and self-explanatory**. 
Consequently, we felt that the previous Fig. 3, which showed contrastive pair-making for a batch of MNIST digits, was optional in the main manuscript, so we moved it to the Appendix as Fig. 9.\"}",
"{\"title\": \"Clarifying Remarks\", \"comment\": \"Dear reviewers and chairs,\\n\\nWe again greatly appreciate your time and effort in reviewing our work and providing suggestions. Since the deadline for authors' comments is today, we would like to mention several clarifying remarks regarding the work. This may resolve several confusions regarding the work. \\n\\n**Type of contribution- Fundamental vs Incremental**: We view our work as a fundamental contribution rather than an incremental one. In cryo-EM/ET, many incremental works focus on tasks like particle picking or determining the location of protein complexes from raw images, aiming to improve state-of-the-art performance. In contrast, our work addresses a novel task: disentangling content and transformations where the transformations, such as conformational changes, lack a well-defined parametric form. This is similar to works like SpatialVAE [1], which introduced the task of disentangling content from 2D rotations and translations, and Harmony [2], which extended this to disentangling content from parameterized transformations, including 2D and 3D affine transformations. By addressing this new challenge, our work expands the boundaries of what is achievable in the field.\\n\\n**Takeaway from cryo-ET results**: The main takeaway from the cryo-ET part of this paper is the demonstration that protein complexes with varying compositions and conformations can be identified from a collection of cryo-ET subtomograms in an unsupervised manner- a capability that previous methods lacked. Earlier approaches were limited to either identifying a few distinct protein complexes from a collection of images or analyzing a few conformations of a single protein complex within a dedicated dataset. 
However, when presented with collections containing multiple distinct protein complexes and multiple conformations, these methods could, at best, identify only a subset of the complexes and failed entirely to distinguish their conformations. This limitation is clearly illustrated in Fig. 6.\\n\\n\\n**Apart from cryo-ET, use of simple datasets- MNIST and LineMod**: We set out to disentangle the composition (semantic content) and conformation (transformations causing subtle voxel-level changes) of protein complexes in cryo-ET subtomogram datasets by framing the problem as an unsupervised content-transformation disentanglement task. These datasets present protein complexes in diverse 3D poses, with varying compositions and conformations, making the task inherently complex. To address this, we adopted a strategy of starting with simpler datasets, such as MNIST and LineMod (please note starmen was excluded in the revised paper), which share similar problem contexts. This strategy is consistent with prior works. For instance, SpatialVAE [1] and Harmony [2], while targeting content-transformation disentanglement in cryo-EM and cryo-ET, first validated their methods on MNIST. Similarly, NAISR [3], designed for interpretable shape analysis in medical imaging, initially tested its approach on the starmen dataset to establish its effectiveness on simpler cases. Following this strategy, we demonstrated our method\\u2019s success on MNIST and LineMod before applying it to cryo-ET subtomograms for the final experiments. \\n\\n[1] Bepler, Tristan, et al. \\\"Explicitly disentangling image content from translation and rotation with spatial-VAE.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[2] Uddin, Mostofa Rafid, et al. \\\"Harmony: a generic unsupervised approach for disentangling semantic content from parameterized transformations.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[3] Jiao, Yining, et al. 
\\\"$\\\\texttt {NAISR} $: A 3D Neural Additive Model for Interpretable Shape Representation.\\\" The Twelfth International Conference on Learning Representations.\\n\\n- ICLR 2025 Conference Submission7350 Authors\"}",
"{\"title\": \"Individual Response to Reviewer aQPG (Part 2)\", \"comment\": \"> \\u201cI also find the experimental results a bit weak. First, the datasets utilized in this work are very simple and results on them probably won\\u2019t guarantee their utility on real-world problems. Second, the metrics utilized on Table 1 are not particularly significant. Third, most of the results are qualitative and based on one or two images that can potentially be cherry picked.\\u201d\\n\\nWe want to reiterate that the problem does not concern real-world natural images, where disentangling transformations from content is not very useful. The datasets utilized in this work, particularly the cryo-ET dataset, perfectly fit the use case. For Table 1, we added metrics and results for the cryo-ET datasets. We also disagree that most results are based on one or two images. Due to the page limit, the unsupervised content-transformation transfer results for MNIST and LineMod in the main manuscript contain fewer (30-45) images; we provided similar results with more images in the Appendix (Fig. 10 and Fig. 12). Also, the latent space visualizations (Fig. 5, Appendix Fig. 11) and the downstream subtomogram averaging (Fig. 6) use information from all the images in the datasets.\", \"the_answers_to_your_questions_are_below\": \">\\u201dOn L147, the authors say that the model \\\"is highly effective in disentangling wide range of transformations from the content in various shape-focused image datasets by only using simple rotation for creating contrastive pairs since the representation for disentangled rotation generalizes over other shape transformations.\\\" Could they elaborate on this? Why is it the case? Where is it shown on the paper? How can we be sure it will work to other modalities besides the toy tasks tested?\\u201d\\n\\nWe have moderated the statement in the contributions section of the Introduction (L143-L144). 
We have demonstrated the disentanglement of these transformations throughout the results section of our paper (Table 1, Fig. 3-6, Fig. 10-12). As for other modalities, we addressed this in our previous answers and in the Discussions & Limitations section. \\n\\n>\\u201dIs the VAE trained at the same time as the contrastive losses? Since the VAE is used to generate samples for the CLs, how does training jointly vs. training in two stages (VAE followed by CLs) change the performance?\\u201d\\n\\nYes, the VAE is trained at the same time as the contrastive losses. We mentioned this in our method section and in the caption for Figure 2. We have tried VAE followed by CLs; however, that did not result in any disentanglement (SAP score was close to 0.0).\"}",
"{\"comment\": \"The authors have addressed most of my concerns in the general comment and the revised version of the manuscript. I have increased my score.\"}",
"{\"title\": \"Response Reminder to Reviewer B3mr\", \"comment\": \"Dear reviewer B3mr,\\n\\nWe have addressed your initial concerns with the revised manuscript and our global and individual responses above. Since the deadline for reviewer comments to authors is today, please let us know if you have further concerns we can address. If you have no more concerns, we would greatly appreciate your reconsidering your initial score.\"}",
"{\"title\": \"Response to Official Comment by Reviewer aQPG\", \"comment\": \"Please look into the individual responses to see if they address your concerns or confusion. Even if you are determined to stick to your initial rating, we respect your judgment. Nevertheless, we would very much appreciate it if you could be specific about the \\\"better metrics\\\", \\\"better comparison\\\", and \\\"more datasets\\\" you are referring to.\\n\\nMoreover, the paper was submitted to the primary area of \\\"applications to physical sciences (physics, chemistry, biology, etc.).\\\" We would greatly appreciate it if you took this issue into account for your final judgment.\"}",
"{\"title\": \"Individual Response to Reviewer UJ45 (Part 1)\", \"comment\": \"Dear Reviewer UJ45\\n\\nWe very much appreciate your effort in thoroughly reviewing our paper and providing valuable suggestions. We are glad that you find our work \\u201coriginal\\u201d with a \\u201cclear\\u201d explanation of methods, our choice of baselines \\u201crelevant,\\u201d and overall our method to have \\u201cpotential.\\u201d\\n\\n**We greatly appreciate you reviewing the global response and increasing your score.** We also prepared a point-by-point response to your concerns in case you find some remaining concerns. The response is as follows:\\n\\n>\\u201dThe explanation of the method in the abstract and introduction is especially unclear. This is also a problem because Figure 2 fails to properly and intuitively show the method. Reading the method section explains this more. To improve, I would suggest clearly highlighting the role of the latent space in the creation of the positive pair that would otherwise be impossible. This could be done similarly to how it was done in Figure 3. Figure 2 would then become useful. Additionally, Figure 2 lacks proper annotations such as labeling of all elements present, and proper caption explaining what happens in the figure in a more complete way. There is some inconsistency in how things are called. In the figure, style, and content are mentioned, however, in the text it is clear that \\\"style\\\" is supposed to be \\\"transformation\\\", please pick one and stick with it in the whole manuscript, either one would suffice, however, transformation is likely to be more accurate.\\u201d\\n\\nThanks for pointing out this issue and providing suggestions. We have modified Fig. 2 accordingly (please see our revised manuscript). Now, Fig. 2 clearly visualizes the contrastive pair creation strategy. Consequently, we felt the previous Fig. 
3 showing contrastive pair creation with a batch of MNIST digits to be optional and moved it to the Appendix as Fig. 9. We also added more details in the caption explaining the figure. We have resolved the style-transformation inconsistency issue and used the term transformation consistently in the Figure. \\n\\n>\\u201d The contributions are a bit bold. The first contribution, especially, is more context for the work than a contribution and could be removed entirely. Please consider reworking the contributions to be more reflective of the actual content.\\u201d\\n\\nThanks for your feedback. Based on your suggestion, we reworked the contributions (please see the revised manuscript) and removed the first contribution.\\n\\n>\\u201dThe related work section should expand a bit more on the protein part, which is currently very unclear for somebody who is not a practitioner. Please provide more examples, even referring to the appendix to understand the data and the context better.\\u201d\\n\\nDue to the page limit, we could not elaborate on the related work in the main manuscript. We provided additional discussions on Appendix Section A1. According to your suggestion, we referred to this section from the related work section of our main manuscript. \\n \\n> \\u201cThe method section has a few mistakes and the explanation is very wordy, which makes it hard to follow. I wrote a few observations in the questions section of this review.\\u201d\\n\\nWe have addressed your questions. \\n\\n> \\u201cThe manuscript should include the limitations of this method, especially regarding the latent space-based approach to creating positive transformation pairs. For example, the limitations should address whether this approach could be extended to real-world datasets or whether this approach should be limited to specific types of datasets.\\u201d\\n\\nThanks for your suggestion. 
We have added a Discussions & Limitations section (Section 5) to the main manuscript and discussed your concerns in that section.\"}",
"{\"comment\": \"I acknowledge and appreciate the authors' further responses. After reading the rebuttal, I will keep my score, as many of my concerns are not entirely addressed.\\n\\nIndeed, I meant CryoET when I wrote CryoEM in my initial post. Independently, CryoEM/CryoET/related technologies are vast research domains with many important applications in biology. What I meant in my initial post is that the experiments conducted in the paper on this application are simple/toy tasks that do not reflect real use-cases of those technologies. Moreover, the main takeaway result of the CryoET experiments was a few UMAP plots (Fig 5) or a few qualitative figures (Fig 6). \\n\\nBy \\\"better datasets\\\" I mean something that is not MNIST/LineMod/StarMen. If the focus is on computer vision applications, then the authors should focus on datasets that are relevant to that community (those are not). If the focus is \\\"applications to physical sciences\\\", maybe the paper could really focus on that topic instead of mostly very toy computer vision datasets. For example, the authors could show results on more than only 3 protein domains, perhaps use experimental data instead of simulated data (if that is possible), or find some other dataset related to physical sciences to experiment with.\\n\\nBy \\\"better metrics/comparisons\\\", I mean not only showing one UMAP figure and one qualitative illustration as validation of the proposed method---that is simply not enough. I am not a specialist in CryoET, but I am sure the community has many metrics to evaluate the quality of their approaches. The D_score is a good start, but that metric alone does not really validate the method in any way.\"}",
"{\"summary\": \"This paper proposes an unsupervised disentangling method to disentangle content and transformation of the input. Specifically, this paper first proposes two conditions that disentanglement of content and transformation should satisfy. Then, this paper proposes a method to construct positive and negative samples with respect to both content and transformation. The key idea is to utilize a variational autoencoder to construct these samples. The experiments are conducted on four datasets, i.e., three of them (mnist, linemod, and starmen) are pure images, and one is protein subtomogram. One quantitative result and several qualitative results are shown to prove the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes well-defined conditions for the disentanglement of content and transformation.\\n2. The experiments are conducted on four datasets, and comprehensive qualitative results are shown.\", \"weaknesses\": \"The main concern of this paper is evaluation, which is insufficient and less significant.\\n1. The first three datasets (mnist, linemod, starmen) are somehow toy datasets, which is less significant in real-world applications.\\n2. I agree that protein conformation is one meaningful real-world application, but other than map visualizations, it fails to produce convincing evaluation results.\\n3. There lacks some widely used evaluating metrics in Table 1 to demonstrate the application of the disentanglement.\\n4. This paper also does not provide comparisons with other baseline methods or state-of-the-art methods.\\n\\n--------------------------------------\", \"post_rebuttal\": \"Thanks for the authors' response. The revision includes more quantitative results and qualitative comparisons, which is appreciated. These additions partially resolved my concerns. 
However, I noticed that the compared methods only date up to 2022, so to some extent they cannot be considered state-of-the-art. \\n\\nAnother issue to mention is that the metrics D(c|c) and D(c|z) were used in the initial submission, and the protein experiments are the most important ones (in my personal view); it is usually discouraged not to report the main quantitative results of the critical experiments in the initial submission and to add them only during the rebuttal.\\n\\nConsidering the rebuttal and revision, I raised my rating to 5. Please use it sparsely.\", \"questions\": \"Please refer to the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper utilizes a VAE with dual latent spaces for disentangling of content and transformations of cellular 3D protein images. Apart from standard VAE loss, the novel part is contrastive learning loss based on positive and negative pairs in terms of content and transformation. The method here is unsupervised, and those positive and negative pairs are generated without supervision or labels. Clustering of content and transformation codes clearly shows the effectiveness of the method as shown in Fig. 6 in the revised version.\\nHowever, reviewers were concerned about real-world impact on downstream tasks. Clustering of latents is another way of visualization for the latents, but not really an application with a profound impact. (There might be a knowledge gap for the reviewers and AC who do not work on biological science).\\nAnother weakness is the lack of comparison to more recent methods on disentangled representation learning as mentioned by the reviewer. Another metric was added in the rebuttal, but overall the amount of quantitative results seems insufficient to be convincing.\\nThe way to construct positive pairs and negative pairs is highly unreliable. Randomly selecting pairs of samples can lead to positive pairs with the same content. 
Also, it is not clear how to bootstrap the joint network to produce synthetic images with the same content but different transformations, in particular at the beginning of training, when the network cannot produce images with high fidelity.\\n\\nThe authors are encouraged to improve the empirical significance of the method and resubmit it to the next conference or to a more domain-specific venue where the impact can be better appreciated.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers B3mr and aQPG were concerned about the lack of real-world applications of disentanglement and the lack of evaluation.\\nDuring the rebuttal period, the authors reported one more metric and included clustering as an application in the revised version.\"}",
"{\"summary\": \"Unsupervised disentanglement of transformations and content is a challenging task that was previously approached primarily through using separate ad-hoc transformation methods, or by self-supervised contrastive-based methods. Ad-hoc transformations suffer from being limited to the given parameterization chosen, while self-supervised methods do not tackle this disentanglement problem directly. In this work, DualContrast is proposed, which consists of a VAE with additional contrastive losses designed to disentangle content and transformation. The hardest challenge is obtaining positive pairs of samples with respect to transformations: changing the content while keeping the transformation constant. In this work, this has been done by decoding two random samples from the prior of the transformation latent space while feeding different permutations of the content latent representation to obtain similar transformations with different content. The method is applied to MNIST, LineMod, Starmen Shapes, and Cryo-ET subtomograms with positive results.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality.\\nIn this work, a novel method to address the problem of creating positive pairs of transformations under content change has been proposed. The core of this work is original.\\n\\nQuality.\\nThe method proposed was evaluated on a sufficient number of datasets. Although not very complex, they could suffice in showing the potential for this approach. The baselines chosen are also relevant to the method proposed.\\n\\nClarity.\\nThe figures in the experiment section allow for a quick qualitative assessment of the performance of the methods. The method explanation is quite clear.\\n\\nSignificance.\\nThis work and the proposed method have shown some potential for successful applications.\", \"weaknesses\": \"The explanation of the method in the abstract and introduction is especially unclear. 
This is also a problem because Figure 2 fails to properly and intuitively show the method. Reading the method section explains this more. To improve, I would suggest clearly highlighting the role of the latent space in the creation of the positive pair that would otherwise be impossible. This could be done similarly to how it was done in Figure 3. Figure 2 would then become useful. Additionally, Figure 2 lacks proper annotations such as labeling of all elements present, and proper caption explaining what happens in the figure in a more complete way. There is some inconsistency in how things are called. In the figure, style, and content are mentioned, however, in the text it is clear that \\\"style\\\" is supposed to be \\\"transformation\\\", please pick one and stick with it in the whole manuscript, either one would suffice, however, transformation is likely to be more accurate.\\n\\nThe contributions are a bit bold. The first contribution, especially, is more context for the work than a contribution and could be removed entirely. Please consider reworking the contributions to be more reflective of the actual content.\\n\\nThe related work section should expand a bit more on the protein part, which is currently very unclear for somebody who is not a practitioner. Please provide more examples, even referring to the appendix to understand the data and the context better.\\n\\nThe method section has a few mistakes and the explanation is very wordy, which makes it hard to follow. I wrote a few observations in the questions section of this review.\\n\\nThe manuscript should include the limitations of this method, especially regarding the latent space-based approach to creating positive transformation pairs. For example, the limitations should address whether this approach could be extended to real-world datasets or whether this approach should be limited to specific types of datasets. \\n\\nThe experiments lack in quantitative results. 
Although disentanglement is very hard to measure, given the ability to choose datasets, it would be much more convincing to have datasets where a quantitative assessment is possible, either in the form of direct supervision (similar to what the disentanglement metric is currently doing) or through some downstream tasks where the disentanglement would be useful. Such tasks could be segmentation or visual question answering.\\nThe choice of the \\\"human deformation\\\" as a dataset is very confusing, and the results reported are also very underwhelming. Although the generated shapes are better, the data appears to be very trivial, so some more information on the training and the difficulty of fitting such samples would be more convincing. The number of parameters, ablations performed, and results from more baselines would be a step in the right direction. It is especially important to keep including all baselines for all datasets used; the results on human deformation appear to be unfinished. If so, it would have been better to simply exclude the dataset from the manuscript.\\nAdditionally, plots of the latent space are only available for the cellular dataset.\", \"questions\": \"In the method section, condition 1 is very confusing. I think it was meant to be \\\"for all $T \\\\in T$ and $x \\\\in X$, $h_c(T(x)) = h_c(x)$\\\", but please correct me if I misunderstood this.\\n\\nIn the method section, many terms are used seemingly interchangeably, such as \\\"latent space\\\", \\\"factor\\\", \\\"representation\\\", \\\"transformation\\\", \\\"content\\\". Please clarify these terms. 
For example, at line 206, \\\"transformation\\\" is used, however, I think it was meant to be \\\"transformation representation\\\" or \\\"transformation factor\\\", unless I misunderstood.\\n\\nThere are a few grammatical and syntactical mistakes, such as inconsistencies in the use of uppercase and lowercase, sometimes writing \\\"shape focused\\\" while other times \\\"shape-focused\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Regarding misunderstandings of the work (Response to reviewer aQPG): Part 2\", \"comment\": \"**References**\\n\\n[1] Bepler, Tristan, et al. \\\"Explicitly disentangling image content from translation and rotation with spatial-VAE.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[2] Uddin, Mostofa Rafid, et al. \\\"Harmony: a generic unsupervised approach for disentangling semantic content from parameterized transformations.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[3] Levy, Axel, et al. \\\"Amortized inference for heterogeneous reconstruction in cryo-EM.\\\" Advances in Neural Information Processing Systems 35 (2022): 13038-13049.\\n\\n[4] Zeng, Xiangrui, et al. \\\"High-throughput cryo-ET structural pattern mining by unsupervised deep iterative subtomogram clustering.\\\" Proceedings of the National Academy of Sciences 120.15 (2023): e2213149120.\\n\\n[5] Zhong, Ellen D., et al. \\\"Reconstructing continuous distributions of 3D protein structure from cryo-EM images.\\\" International Conference on Learning Representations.\\n\\n[6] Zhong, Ellen D., et al. \\\"CryoDRGN: reconstruction of heterogeneous cryo-EM structures using neural networks.\\\" Nature Methods 18.2 (2021): 176-185.\\n\\n[7] Powell, Barrett M., and Joseph H. Davis. \\\"Learning structural heterogeneity from cryo-electron sub-tomograms with tomoDRGN.\\\" Nature Methods (2024): 1-12.\\n\\n[8] Zhong, Ellen D., et al. \\\"CryoDRGN2: Ab initio neural reconstruction of 3D protein structures from real cryo-EM images.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\\n\\n[9] Jiao, Yining, et al. \\\"NAISR: A 3D Neural Additive Model for Interpretable Shape Representation.\\\" The Twelfth International Conference on Learning Representations.\"}",
"{\"title\": \"Regarding misunderstandings of the work (Response to reviewer aQPG): Part 1\", \"comment\": \"Thanks for your detailed response and suggestions. We very much appreciate your time. Nevertheless, we think you have a few misunderstandings regarding our work, contribution, and qualitative evaluation. We have clarified them below:\\n\\n\\n**Type of contribution- Fundamental vs Incremental:** We view our work as a fundamental contribution rather than an incremental one. In cryo-EM/ET, many incremental works focus on tasks like particle picking or determining the location of protein complexes from raw images, aiming to improve state-of-the-art performance. In contrast, our work addresses a novel task: disentangling content and transformations where the transformations, such as conformational changes, lack a well-defined parametric form.\\nThis is similar to works like SpatialVAE [1], which introduced the task of disentangling content from 2D rotations and translations, and Harmony [2], which extended this to disentangling content from parameterized transformations, including 2D and 3D affine transformations. By addressing this new challenge, our work expands the boundaries of what is achievable in the field.\\n\\n**Takeaway from cryo-ET results:** The main takeaway from the cryo-ET part of this paper is the demonstration that protein complexes with varying compositions and conformations can be identified from a collection of cryo-ET subtomograms in an unsupervised manner- a capability that previous methods lacked. Earlier approaches were limited to either identifying a few distinct protein complexes from a collection of images or analyzing a few conformations of a single protein complex within a dedicated dataset. However, when presented with collections containing multiple distinct protein complexes and multiple conformations, these methods could, at best, identify only a subset of the complexes and failed entirely to distinguish their conformations. 
This limitation is clearly illustrated in Fig. 6.\\n\\n**Qualitative Evaluation:** Regarding the qualitative evaluation, the only way to identify the presence of a protein complex in a collection of subtomogram images is to perform clustering, then subtomogram averaging, and observe the subtomogram averaging result. This is precisely what we have done (in Fig 6). If you look into the relevant works [3-8] in cryo-EM and cryo-ET domains, they have all done the same. \\n\\nMoreover, the evaluation was performed using the entire dataset; we inferred the latent codes for all the images in the dataset, performed GMM clustering, and then performed subtomogram averaging for each cluster. Fig. 6 shows the subtomogram averaging results. Fig. 5 shows the UMAP of the latent codes for all the images in the dataset. We did not cherry-pick a few images and show them, which would also be meaningless in this scenario. \\n\\n**Quantitative Evaluation:** One possible quantitative evaluation is to demonstrate how predictive each latent code is for the ground truth factor and to what extent they are separate. In the revised version, we have performed these quantitative evaluations for our cryo-ET dataset (see Table 1, blue colored part) with D_score and SAP score respectively. \\n\\n**Use of MNIST and LineMod:** We set out to disentangle the composition (semantic content) and conformation (transformations causing subtle voxel-level changes) of protein complexes in cryo-ET subtomogram datasets by framing the problem as an unsupervised content-transformation disentanglement task. These datasets present protein complexes in diverse 3D poses, with varying compositions and conformations, making the task inherently complex.\\nTo address this, we adopted a strategy of starting with simpler datasets, such as MNIST and LineMod (please note starmen was excluded in the revised paper), which share similar problem contexts. This strategy is consistent with prior works. 
For instance, SpatialVAE [1] and Harmony [2], while targeting content-transformation disentanglement in cryo-EM and cryo-ET, first validated their methods on MNIST. Similarly, NAISR [9], designed for interpretable shape analysis in medical imaging, initially tested its approach on the starmen dataset to establish its effectiveness on simpler cases. Following this strategy, we demonstrated our method\\u2019s success on MNIST and LineMod before applying it to cryo-ET subtomograms for the final experiments.\\n\\n(References in Part 2)\"}",
"{\"title\": \"Individual Response to Reviewer aQPG (Part 1)\", \"comment\": \"Dear Reviewer aQPG,\\n\\nPlease look into the individual responses to your initial comments to find out if they address your concerns. **We did not clarify all your concerns in the global response since the global response only reflects the major changes in the manuscript, not your individual concerns.** \\n\\n> \\u201cThe authors often mention that the work focuses on \\\"shape-focused real-world images\\\", but they only applied it in very simplified, toyish settings, very far from \\\"real-world images\\\". Even the CryoEM task is a very simplified task.\\u201d\\n\\nWe think you are confusing the real world with \\u2018natural images\\u2019. The shape-focused scientific images are also \\u2018real-world\\u2019; however, they are far from the natural images found in ImageNet. Our revised manuscript mostly replaced \\u2018scientific images\\u2019 with \\u2018real-world\\u2019 to remove the confusion. In most shape-focused scientific image datasets, the transformations include subtle changes in the pixel space, which we aimed to disentangle from the content. Also, since you mentioned cryoEM as a simplified task, we think you are confusing cryoEM with cryoET. CryoET contains 3D images of protein complexes with different identities and conformations, whereas cryoEM contains 2D images of protein complexes with the same identity and the same or different conformations. So, for cryoEM, disentangling conformation and identity is simpler, but not for cryoET. \\n\\n> \\u201cThe choice of positive/negative samples for each factor is very ad-hoc. The paper lacks explanation and empirical validation on why this choice makes sense versus others.\\u201d\\n\\nThe problem we are targeting is specific to unsupervised content and transformation disentanglement without explicit parameterization of the transformation. 
Consequently, the choice of positive/negative samples for the content and transformation factors is also specific to these factors. In the ablation study (L517-L529) in the main manuscript and Appendix Sec A4.4, we empirically validated this choice. We have mentioned that using contrastive losses for only one factor optimizes the reconstruction through the other factor, making the latter factor capture all the information of the data (a degenerate solution). We have also examined whether using only positive or only negative pairs for both codes is sufficient for disentanglement; we found that both lead to suboptimal disentanglement. If only negative pairs are used, only rotation is disentangled. If only positive pairs are used, the transformation code becomes uninformative of the data, similar to the degenerate solution. We mentioned that we did not use the contrastive losses used in SimCLR and MoCo since they resulted in feature suppression (see Appendix Section A2.2). We provided empirical evidence on why we chose rotation (Appendix Section A2.3, Figure 8, Table 2, L286 in the main manuscript). \\n\\nDo you have any alternate choices in mind? If so, please mention them so we can have a logical discussion about whether the alternate choice could have been used. \\n\\n\\n> \\u201cI found the choice of using VAE-generated samples as data to train the contrastive loss very strange. This idea of using generated samples to train a model is not well understood. This approach might have worked in the very simplified tasks tested in the paper, but it is very unlikely that the proposed model would work on any real-world dataset.\\u201d\\n\\nWe use a bi-latent-space VAE: one latent space for content and another for transformation. To create positive contrastive pairs for transformation, we use samples generated from the transformation latent space using a prior Gaussian distribution. 
Since the reconstruction and contrastive losses are used simultaneously, the losses on the other contrastive pairs map data samples with subtle conformational changes close together in the transformation latent space. When we sample from this space to create positive contrastive pairs and encourage their similarity in the latent space, the samples with subtle conformational changes become closer in the transformation latent space. Thus, the transformation code captures the subtle pixel-space changes and the in-plane rotation in the data. However, this phenomenon was not observed when we used transformations other than in-plane rotation to create contrastive pairs. We regard this as an empirical finding of our manuscript. \\n\\nAgain, we respectfully disagree that the tasks in this paper were very simplified. The cryo-ET scenario was not simple, given the multiple levels of heterogeneity, the confounding of pose and shift, and the high noise in the dataset. Given the results on MNIST, LineMod, and cryo-ET subtomograms, the proposed method is likely to work similarly and disentangle transformations that cause small pixel-space changes from scientific image datasets. However, as mentioned earlier, real-world natural images are very different, and our study does not consider those images.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"I thank the authors for their rebuttal. However, the rebuttal did not address most of my initial concerns (related to model design and empirical evaluation). I would suggest that the authors really improve the experiments and evaluation (e.g., better metrics, better comparisons to SOTA, more relevant datasets) and resubmit to a future conference. I keep my rating.\"}",
"{\"title\": \"Individual Response to Reviewer B3mr (Updates in Evaluation)\", \"comment\": \"Dear reviewer B3mr,\\n\\nWe thank you for finding our qualitative results comprehensive and our proposed conditions for content-transformation disentanglement to be well-defined.\", \"please_see_our_response_to_your_concerns_below\": \">\\u201dThe first three datasets (mnist, linemod, starmen) are somehow toy datasets, which is less significant in real-world applications.\\u201d\\n\\nWe first evaluated our method and the baselines for these datasets to assess whether they could disentangle conformations from compositions in our protein dataset. Although toy-like, these datasets feature transformations akin to protein conformations, with subtle pixel-level changes. Moreover, the baseline methods- Harmony, SpatialVAE, and VITAE- were all tested on the MNIST dataset, so we also started our experiments with MNIST. LineMod was a reasonable RGB single-object dataset to evaluate since the size of the dataset and the type of transformations are comparable to protein datasets. Starmen may be a bit redundant; consequently, we excluded this dataset in our revised manuscript (as mentioned in the global response). \\n\\n\\n>\\u201dI agree that protein conformation is one meaningful real-world application, but other than map visualizations, it fails to produce convincing evaluation results.\\u201d\\n\\nWe have provided additional results (Fig. 6 in the revised manuscript) showing how DualContrast can identify distinct conformations of proteins with subtle conformational changes from a protein mixture cryo-ET subtomogram dataset. This could not be achieved with any other method. Given the importance of identifying distinct conformations of proteins in diagnosis and drug discovery, this is undoubtedly a significant contribution. Please go through Fig. 6 and its description (in blue text) from line 490 to line 502 in the revised manuscript. 
\\nWe also included quantitative disentanglement results for the protein dataset in Table 1. However, the qualitative results (Fig. 5 and Fig. 6) are more important for this use case.\\n\\n> \\\"There lacks some widely used evaluating metrics in Table 1 to demonstrate the application of the disentanglement.\\\"\\n\\nWe added an additional evaluation metric (SAP score) in Table 1, which demonstrates the separateness of the latent codes, apart from their informativeness (which is measured with $D_{score}$). We agree that there are many evaluation metrics for disentanglement, but as we mentioned in our manuscript (line 342), all these evaluation metrics have been found to be highly correlated (Locatello et al. [1]). The relevant baseline works, e.g., VITAE [2], Harmony [3], etc., used only these metrics to measure content and transformations\\u2019 disentanglement. Consequently, we use only $D_{score}$ and SAP score as evaluation metrics. Since the other metrics are highly correlated, the performance is expected to be similar with different metrics. \\n\\n> \\u201cThis paper also does not provide comparisons with other baseline methods or state-of-the-art methods.\\u201d\\n\\nWe respectfully disagree with this statement. Throughout our result section, in Table 1, in Fig. 3, Fig. 4, Fig. 5, Fig. 6, and in Appendix Fig. 10, Fig. 11, Fig. 12 (numbers based on the revised manuscript), we provided extensive comparisons with the baseline methods. Our baseline methods include the state-of-the-art unsupervised content-transformation disentangling methods, e.g., Harmony [3], SpatialVAE [4], VITAE [2], etc. We did not include general disentangled representation learning methods like $\\\\beta$-TC-VAE, Factor-VAE, etc., as they are not specifically designed for content and transformation disentanglement. Moreover, previous works [2,3,4] found that these approaches perform poorly in disentangling content and transformations with their generic strategy. 
\\n\\nIf you have any method in mind that you want us to compare with, please let us know specifically. We would be happy to compare those methods through experimentation or logical discussion. \\n\\n\\n**References**\\n\\n[1] Locatello, Francesco, et al. \\\"Challenging common assumptions in the unsupervised learning of disentangled representations.\\\" international conference on machine learning. PMLR, 2019.\\n\\n[2] Skafte, Nicki, and S\\u00f8ren Hauberg. \\\"Explicit disentanglement of appearance and perspective in generative models.\\\" Advances in Neural Information Processing Systems 32 (2019).\\n\\n[3] Uddin, Mostofa Rafid, et al. \\\"Harmony: a generic unsupervised approach for disentangling semantic content from parameterized transformations.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[4] Bepler, Tristan, et al. \\\"Explicitly disentangling image content from translation and rotation with spatial-VAE.\\\" Advances in Neural Information Processing Systems 32 (2019).\"}",
"{\"title\": \"Individual Response to Reviewer UJ45 (Part 2)\", \"comment\": \">\\u201dThe experiments lack in quantitative results. Although disentanglement is very hard to measure, given the ability to choose datasets, it would be much more convincing to have datasets where a quantitative assessment is possible either in the form of direct supervision (similar to what the disentanglement metric is currently doing), or through some downstream tasks where the disentanglement would be useful. Such tasks could be segmentation, or visual question answer. The choice of the \\\"human deformation\\\" as a dataset is very confusing, and the results reported are also very underwhelming. Although the generated shapes are better, the data appears to be very trivial, so some more information on the training and the difficulty of fitting such samples would be more convincing. The number of parameters, ablation performed, and results from more baselines would be a step in the right direction. It is especially important to keep including all baselines for all datasets used, the results on human deformation appear to be unfinished. If so, it would have been better to simply exclude the dataset from the manuscript. Additionally, plots of the latent space are only available for the cellular dataset.\\u201d\\n\\nWe added additional quantitative results in Table 1. We included a new metric called SAP score and defined it in the evaluation part. Given the insignificance of human deformation in the context, we excluded them from our revised manuscript. We calculated the metrics across all baselines of all the remaining datasets (Please see Table 1). Given the page limit, latent space plots for only the protein dataset were provided in the main manuscript (Fig. 5), where it has the most significance. We included latent space plots for other datasets in Appendix Sec A4.\", \"the_answers_to_your_questions_are_below\": \">Q: \\u201cIn the method section, condition 1 is very confusing. 
I think it was meant to be \\\"for all and \\\", but please correct me if I misunderstood this.\\u201d\\n\\nYes, you are correct. To remove confusion, we updated the wording in the two conditions in Section 3.1. Instead of \\u201cfor any $x \\\\in \\\\mathcal{X}$,\\u201d we used \\u201c$\\\\forall x \\\\in \\\\mathcal{X}$\\u201d to be consistent with our wording.\\n\\n>Q: \\u201cIn the method section, many terms are used seemingly interchangeably, such as \\\"latent space\\\", \\\"factor\\\", \\\"representation\\\", \\\"transformation\\\", \\\"content\\\". Please clarify these terms. For example, at line 206, \\\"transformation\\\" is used, however, I think it was meant to be \\\"transformation representation\\\" or \\\"transformation factor\\\", unless I misunderstood.\\n\\nThanks for pointing this out. We have resolved this confusion in the revised manuscript. In the revised manuscript, we only used factor when referring to the ground truth generative factor of the data. So transformation factor means the actual transformation generative factor of the data. On the other hand, when referring to the encoder-predicted content or transformation, we use the word latent codes or code. We use the term latent space when we refer to all the latent codes for the entire data space. When we only say \\u201ccontent\\u201d or \\u201ctransformation,\\u201d we mainly refer to the factor.\\n\\n>Q: \\u201cThere are a few grammatical and syntactical mistakes, such as inconsistencies in the use of uppercase and lowercase, sometimes writing \\\"shape focused\\\" while other times \\\"shape-focused\\\".\\n\\nThanks for bringing this issue to our attention. We have corrected the inconsistencies as much as possible. We also changed all \\u201cshape-focused\\u201d in the previous manuscript to \\u201cshape focused\\u201d in the revised manuscript. In the revised manuscript, we consistently used \\u201cshape focused.\\u201d\"}",
"{\"summary\": \"In this paper, the authors propose a model that learns unsupervised representations for \\\"shape-focused images\\\". In particular, their method, DualContrast, learns to disentangle \\\"content\\\" and \\\"transformation\\\" in an unsupervised fashion. The model is trained with a combination of 2 contrastive losses (one for content, one for transformations) and a VAE loss (the VAE is used to sample positive samples for the CL of transformations). The authors show results of the proposed model on multiple small/toy-ish datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and easy to follow\", \"The idea of disentangling features is an important problem in many applications of machine learning\", \"The proposed approach is simple and well motivated\"], \"weaknesses\": [\"The authors often mention that the work focuses on \\\"shape-focused real-world images\\\", but it is only applied in very simplified, toy-ish settings, very far from \\\"real-world images\\\". Even the CryoEM task is a very simplified task.\", \"The choice of positive/negative samples for each factor is very ad-hoc. The paper lacks explanation and empirical validation on why this choice makes sense versus others.\", \"I found the choice of using VAE-generated samples as data to train the contrastive loss very strange. This idea of using generated samples to train a model is not well understood. This approach might have worked in the very simplified tasks tested in the paper, but it is very unlikely that the proposed model would work on any real-world dataset.\", \"I also find the experimental results a bit weak. First, the datasets utilized in this work are very simple and results on them probably won't guarantee their utility on real-world problems. Second, the metrics utilized in Table 1 are not particularly significant. 
Third, most of the results are qualitative and based on one or two images that can potentially be cherry picked.\"], \"questions\": [\"On L147, the authors say that they model \\\"is highly effective in disentangling wide range of transformations from the content in various shape-focused image datasets by only using simple rotation for creating contrastive pairs since the representation for disentangled rotation generalizes over other shape transformations.\\\" Could they elaborate on this? Why is it the case? Where is it shown on the paper? How can we be sure it will work to other modalities besides the toy tasks tested?\", \"Is the VAE trained at the smae time as the contrastive losses? Since the VAE is used to generated samples for the CLs, how do training jointly vs training in two stages (VAE followed by CLs) change the performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
EM93t94zEi | Learning Spatial-Semantic Features for Robust Video Object Segmentation | [
"Xin Li",
"Deshui Miao",
"Zhenyu He",
"Yaowei Wang",
"Huchuan Lu",
"Ming-Hsuan Yang"
] | Tracking and segmenting multiple similar objects with distinct or complex parts in long-term videos is particularly challenging due to the ambiguity in identifying target components and the confusion caused by occlusion, background clutter, and changes in appearance or environment over time. In this paper, we propose a robust video object segmentation framework that learns spatial-semantic features and discriminative object queries to address the above issues. Specifically, we construct a spatial-semantic block comprising a semantic embedding component and a spatial dependency modeling part for associating global semantic features and local spatial features, providing a comprehensive target representation. In addition, we develop a masked cross-attention module to generate object queries that focus on the most discriminative parts of target objects during query propagation, alleviating noise accumulation to ensure effective long-term query propagation. The experimental results show that the proposed method sets new state-of-the-art performance on multiple data sets, including the DAVIS2017 test (\textbf{87.8\%}), YoutubeVOS 2019 (\textbf{88.1\%}), MOSE val (\textbf{74.0\%}), and LVOS test (\textbf{73.0\%}), which demonstrate the effectiveness and generalization capacity of the proposed method. We will make all the source code and trained models publicly available. | [
"Video Object Segmentation",
"Spatial-Semantic Feature",
"Long-Term",
"Discriminative Object Queries"
] | Accept (Poster) | https://openreview.net/pdf?id=EM93t94zEi | https://openreview.net/forum?id=EM93t94zEi | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wa2MSTufFf",
"t5NO3p8v7b",
"inS3YmUAUf",
"gG73LDb67R",
"ehp68y5Mmq",
"ebxUO0EnCU",
"dm43fcrBCE",
"auWIAZZ9Iz",
"afJNxu5cmw",
"W1lI4LrGoA",
"UbUWdnnEZt",
"U5mMv6gmps",
"TimDwoIpzR",
"QMKAMFTGuX",
"PQZoOi88TE",
"NEHZmsD5NJ",
"MKrS4v8CM5",
"JCyP6E4bax",
"Fzr5KIMluQ",
"FdA28PS3Lx",
"Dy5GEjlJLZ",
"BrYKLeS6Tq",
"8snovc2hwQ",
"4ZQa1x3HWE"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review"
],
"note_created": [
1732550207434,
1732598469182,
1732027633855,
1732346218177,
1732598486234,
1732502197751,
1730555928322,
1732028207622,
1732029479586,
1732598418893,
1730699958919,
1732345352944,
1732345346260,
1730524123483,
1737523454841,
1732503255204,
1732028897753,
1732026038227,
1732345349821,
1732493530820,
1732029087487,
1732345355517,
1734776622303,
1730114570975
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_TGSj"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_dBZh"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_eHC5"
],
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_dBZh"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_TGSj"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_ZJce"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_ZJce"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1489/Area_Chair_TE13"
],
[
"ICLR.cc/2025/Conference/Submission1489/Reviewer_eHC5"
]
],
"structured_content_str": [
"{\"comment\": \"The rebuttal has addressed most of my concerns and I lean to accept.\"}",
"{\"title\": \"Thanks a lot\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your acknowledgment and dedication in reviewing our work.\\n\\nThank you\"}",
"{\"title\": \"Response to Reviewer TGSj\", \"comment\": \"We sincerely thank Reviewer TGSj for reviewing this paper.\\n\\n**Q1: Deformable convolution or simple position encoding.** \\n Compared to a simple position encoding mechanism, the deformable cross-attention module provides fine-grained spatial details and local features (using dynamic offsets), which better supports the association with the semantic features (especially for objects with complex structures). Besides, sparse attention with linear complexity is more suitable for VOS than global attention with quadratic complexity.\\n\\n In the table below, we replace deformable cross-attention with traditional cross-attention. The results demonstrate that using deformable cross-attention significantly enhances the model's performance, especially in datasets with complex targets (**\\\\+1.2% gains over the MOSE dataset**).\\n\\n| Datasets | | D17| | | MOSE | | | | YT19 | | | | LVOS | |\\n|----------------------|-------------|-----------|-----------|----------------|--------------|--------------|------------|-----------|-----------|-----------|-----------|------------|----------|----------|\\n| Methods | J&F | J| F |J&F| J| F | J&F | Jseen| Fseen| Junseen | Funseen | J&F| J | F |\\n| Global attention | 86.1 | 82.0 | 90.2 | 67.3 | 63.2 | 71.5 | 86.4 | 85.5 | 90.2 | 81.1 | 88.7 | 64.1 | 59.5 | 68.7 |\\n| Deformable attention | 86.7 | 82.7 | 90.8 | 68.5 | 64.5 | 72.6 | 87.5 | 86.8 | 91.8 | 81.3 | 89.9 | 66.5 | 62.1 | 70.8 |\\n\\n**Q2: Why does adding spatial cues suppress emphasis on the target instance while enhancing object instances with the same semantics?** \\n During the spatial dependence modeling process, deformable attention is employed to allow the model to focus more on detailed information, which inevitably enhances the features of objects of the same class. 
This will not affect the target association across frames, since the proposed method develops a discriminative query propagation module to distinguish targets in the target association phase. In summary, the spatial cues contribute to more accurate predictions and the discriminative query propagation module ensures correct target associations.\\n\\n**Q3: Advantage compared to SAM2.** \\n Compared to SAM2, our approach has the following new designs.\\n\\n We introduce spatial-semantic information into VOS by explicitly modeling semantic and spatial information through the design of the SS Block, significantly enhancing the model's understanding of the target. Our model achieves comparable performance against SAM2 on various benchmarks, even when trained on a very small-scale dataset.\\n\\n To handle target association in long-term tracking scenarios, we propose a discriminative query mechanism, which contributes to the comparable performance against SAM2 (**\\\\+7.9% in LVOSV2 and \\\\+1.1% in LVOS V1**). The discriminative query generation effectively extracts and memorizes the salient feature points of the target, facilitating accurate identification and association during long-term tracking. The results on the LVOS dataset further validate this capability (**83.7% on LSVOSv2-val and 73.0% on LVOS-test**).\"}",
"{\"comment\": \"Thank you for your response, which has basically resolved all my questions.\"}",
"{\"title\": \"Thanks a lot\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your acknowledgment and dedication in reviewing our work.\\n\\nThank you\"}",
"{\"comment\": \"Thanks for your response. My concerns have been solved, and I have adjusted my score. Good luck.\"}",
"{\"summary\": \"This paper addresses the complex task of tracking and segmenting multiple similar objects in long-term videos, where identifying target objects becomes challenging due to factors like occlusion, cluttered backgrounds, and appearance changes. To tackle these issues, the authors propose a new framework for robust video object segmentation, focusing on learning spatial-semantic features and generating discriminative object queries.\\n\\nThe framework introduces a spatial-semantic block that combines global semantic embedding with local spatial dependency modeling, which enhances the representation of target objects by capturing both broad context and fine details. Additionally, a masked cross-attention module refines object queries, concentrating on the most distinctive parts of target objects and reducing noise accumulation over time. This approach aids in effective long-term query propagation, a critical factor for high-performance tracking over extended sequences.\\n\\nThe experimental results are strong, showing state-of-the-art performance across several benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper\\u2019s S3 algorithm for Video Object Segmentation (VOS) demonstrates notable strengths:\\n\\n1.Spatial-Semantic Integration: By combining semantic embedding with spatial dependency modeling, it effectively captures complex object structures without requiring extensive ViT retraining.\\n\\n2.Discriminative Query Mechanism: The adaptive query approach improves target focus and reduces noise in long-term tracking, enhancing robustness.\\n\\n3.Extensive Validation: State-of-the-art results on multiple benchmarks highlight its strong generalization across datasets.\", \"weaknesses\": \"1.This paper claims to address the challenges of long-term tracking and segmentation. 
However, as far as I know, memory mechanisms are crucial for tackling these challenges in long-term tracking and segmentation, yet the authors do not seem to have conducted ablation experiments on the number of frames in the memory bank.\\n\\n2. I believe that the ablation study on the number of queries is insufficient with only 8, 16, and 32 as tested values. A wider range of query counts should be explored to more thoroughly validate the effectiveness of the proposed method.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer dBZh\", \"comment\": \"We sincerely thank Reviewer dBZh for reviewing this paper.\\n\\n**Q1: Missing ablation study about the number of frames used in the long-term memory bank.** \\n We present the experimental results of using different numbers of memorized frames in the following table. The experiments are conducted on the LVOS Val dataset, which shows greater sensitivity to the number of stored frames in the long-term memory.\\n\\n The table shows that the performance increases progressively with an increasing number of stored frames and the improvement saturates beyond a certain point. Considering the used Memory and run time also increase significantly, we set the Memory number as 20 (corresponding to 100 frames, storing one image every 5 frames), which achieves a good balance between accuracy and efficiency. \\n\\n| Memory Numbers | J&F | J | F | Memory | Inference Time |\\n|----------------|-------|--------|--------|---------|--------|\\n| 10 | 68.9 | 64.3 | 73.5 | 2901M | 39min |\\n| 20 | 69.3 | 64.8 | 74.0 | 4169M | 40min |\\n| 30 | 70.1 | 65.4 | 74.8 | 6151M | 46min |\\n| 40 | 71.1 | 66.3 | 76.0 | 8145M | 52min |\\n| 50 | 71.2 | 66.2 | 76.1 | 9203M | 57min |\\n\\n**Q2: A wider range of query counts should be explored to more thoroughly validate the effectiveness of the proposed method.** \\n In the table below, we also provide query numbers with additional values of 4, 10, and 64. The results show that:\\n\\n (1) Increasing the number of queries does not consistently improve the model's performance. Using larger numbers of queries may degrade the performance slightly, as using too many queries will introduce more noise during training and slow down the convergence speed. 
\\n\\n| Queries | D17 (J&F) | MOSE-val (J&F) | YT19 (J&F) | LVOS (J&F) | YT18 (J&F) |\\n|---------|-------------|----------------|------------|------------|------------|\\n| 64 | 85.0 | 65.5 | 86.7 | 65.2 | 86.6 |\\n| 32 | 85.8 | 68.3 | 86.9 | 64.8 | 86.8 |\\n| 16 | 86.6 | 67.7 | 87.0 | 66.4 | 86.9 |\\n| 10 | 86.4 | 68.2 | 87.0 | 66.4 | 87.1 |\\n| **8** | **86.7** | **68.5** | **87.5** | **66.5** | **87.4** |\\n| 4 | 83.1 | 60.2 | 85.2 | 53.5 | 85.2 |\\n\\n (2) The model achieves the best performance when the number of queries is set to 8. This is primarily because the number of queries has a close relationship with the number of targets. The table below presents the target number statistics across different datasets, which are mostly concentrated between 1 and 5. Setting the query number to 8 effectively covers almost all sequences in the datasets, as verified by the experimental results.\\n\\n| Target numbers | YTVOS19 | MOSE | LVOSv2 |\\n|-------------------|---------|------|--------|\\n| 1 | 168 | 188 | 91 |\\n| 2 | 171 | 58 | 31 |\\n| 3 | 132 | 15 | 8 |\\n| 4 | 26 | 22 | 4 |\\n| 5 | 7 | 11 | 0 |\\n| 6 | 3 | 8 | 2 |\\n| 7 | 0 | 4 | 1 |\\n| >=8 | 0 | 5 | 2 |\"}",
"{\"title\": \"General Response\", \"comment\": \" We thank the reviewers for their feedback and valuable suggestions, which helped us further strengthen the paper.\\n\\n We are glad that all the reviewers found our approach to be **well-motivated** and **effective**, the integration of high-level semantics and low-level spatial cues to be **promising for VOS**, the **thoroughness of our experiments and ablation studies**, and the **state-of-the-art results** achieved on multiple VOS benchmarks. According to the comments, we have presented more details about the algorithm design and provided more experiments. We highlight a couple of common concerns in this general response and please find the detailed feedback under each comment.\\n\\n### **(1) Experiments with a wider range of query counts to more thoroughly validate the effectiveness of the proposed method.** \\n In the table below, we provide results for query numbers 4, 10, and 64. The results show that increasing query numbers does not consistently improve performance, as more queries may introduce noise and cause slow convergence. 
The best performance is achieved with 8 queries, which are related to the target object number (mostly 1\\u20135) across datasets, effectively covering most samples.\\n| Queries | D17 (J&F) | MOSE-val (J&F) | YT19 (J&F) | LVOS (J&F) | YT18 (J&F) |\\n|---------|-------------|----------------|------------|------------|------------|\\n| 64 | 85.0 | 65.5 | 86.7 | 65.2 | 86.6 |\\n| 32 | 85.8 | 68.3 | 86.9 | 64.8 | 86.8 |\\n| 16 | 86.6 | 67.7 | 87.0 | 66.4 | 86.9 |\\n| 10 | 86.4 | 68.2 | 87.0 | 66.4 | 87.1 |\\n| **8** | **86.7** | **68.5** | **87.5** | **66.5** | **87.4** |\\n| 4 | 83.1 | 60.2 | 85.2 | 53.5 | 85.2 |\\n\\n| Target numbers | YTVOS19 | MOSE | LVOSv2 |\\n|-------------------|---------|------|--------|\\n| 1 | 168 | 188 | 91 |\\n| 2 | 171 | 58 | 31 |\\n| 3 | 132 | 15 | 8 |\\n| 4 | 26 | 22 | 4 |\\n| 5 | 7 | 11 | 0 |\\n| 6 | 3 | 8 | 2 |\\n| 7 | 0 | 4 | 1 |\\n| >=8 | 0 | 5 | 2 |\\n\\n### **(2) More ablation experiments about the Spatial-Semantic block.** \\n To save space, we placed the detailed ablation study in Table 7 in the Appendix section. Table 7 shows that spatial dependence modeling significantly improves performance across all VOS datasets, with a 3.0%+ J&F gain on MOSE val. This improvement comes from better detail modeling during feature extraction. Adding semantic information further boosts performance, reaching 73% J&F on the LVOS test, demonstrating its value in enhancing target semantic understanding. We provide some experiments in the table below. 
For more details, please refer to Table 7 in the Appendix.\\n\\n| **Dataset** | **MOSE** | **D17 test** | **LVOS test** | **YT19** |\\n|------------------------------------|----------|--------------|---------------|----------|\\n| **Method** | **J&F** | **J&F** | **J&F** | **J&F** |\\n| **Trained on YouTubeVOS and DAVIS datasets** | | | | |\\n| XMem (Baseline) | 53.3 | 81.0 | 50.0 | 85.5 |\\n| Cutie | 64.0 | 84.2 | 56.2 | 86.1 |\\n| +Discriminative Query | 64.2 | 85.2 | 57.4 | 86.5 |\\n| +Spatial | 68.2 | 86.2 | 67.4 | 87.3 |\\n| +Semantic (Full) | 68.5 | 86.7 | 66.5 | 87.5 |\\n| **Trained on the MEGA datasets** | | | | |\\n| Cutie | 69.9 | 86.1 | 66.7 | 87.0 |\\n| +Discriminative Query | 70.6 | 86.6 | 66.5 | 87.5 |\\n| +Spatial | 73.5 | 87.6 | 68.8 | 87.9 |\\n| +Semantic (Full) | 74.0 | 87.8 | 73.0 | 88.1 |\"}",
"{\"title\": \"Thanks\", \"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your acknowledgment and dedication in reviewing our work.\\n\\nThank you\"}",
"{\"summary\": \"This paper focuses on video object segmentation. The authors analyze the existing challenges like structural complexity, occlusion, and dramatic appearance changes, and correspondingly propose spatial-semantic feature augmentation as well as discriminative query association. The ablation studies and visualizations verify the effectiveness of each module.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The motivation is clear and the architecture makes sense. Integrating high-level semantics and low-level spatial cues is promising in video object segmentation.\\n2. The experiments are thorough and the ablation studies can well reflect the effectiveness of each module.\", \"weaknesses\": \"1. The method is complicated. What is the advantage of using spatial offsets with deformable convolution compared to simple position encodings?\\n2. The second row of Figure 3(a) seems strange. With semantic feature augmentation, the feature maps can well highlight the desired object instance. Adding spatial cues on the contrary suppresses the emphasis on the target instance but enhances object instances with the same semantics.\\n3. Compared to SAM2, which designs a memory to prompt the segmentation of new frames, what is the advantage of this architecture?\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Please let us know if we address all the issues\", \"comment\": \"Dear Reviewer,\\n\\nThank you for the comments on our paper. We have provided a response and a revised paper on Openreview. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all the issues, and we look forward to resolving any additional questions or concerns you may have.\\n\\nThank you again for your time and effort.\\n\\nBest regards\"}",
"{\"title\": \"Please let us know if we address all the issues\", \"comment\": \"Dear Reviewer,\\n\\nThank you for the comments on our paper. We have provided a response and a revised paper on Openreview. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all the issues, and we look forward to resolving any additional questions or concerns you may have.\\n\\nThank you again for your time and effort.\\n\\nBest regards\"}",
"{\"summary\": \"This paper presents a novel spatial-semantic block that effectively integrates semantic information with spatial features, resulting in a more comprehensive representation of target objects, especially those with complex or distinct parts. By utilizing a pre-trained Vision Transformer (ViT) backbone without the need to retrain all parameters, the proposed method significantly enhances the efficiency of video object segmentation (VOS).\\n\\nAdditionally, the development of a discriminative query mechanism marks a substantial advancement in the field. This mechanism prioritizes the most representative regions of target objects, thereby improving the reliability of target representation and query updates. This is particularly advantageous in long-term video scenarios, where appearance changes and occlusions can lead to noise accumulation during query propagation.\\n\\nThe authors also highlight the importance of learning comprehensive target features that encompass semantic, spatial, and discriminative information. This holistic approach effectively addresses challenges related to appearance variations and identity confusion among similar-looking objects in long-term videos, making it a valuable contribution to the VOS community.\\n\\nFinally, extensive experimental results demonstrate that the proposed method achieves state-of-the-art performance across multiple benchmark datasets, including DAVIS 2017, YouTube VOS 2019, MOSE, and LVOS.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents a spatial-semantic modeling method and a discriminative query mechanism that significantly enhance the model's performance. Extensive experiments have been conducted to demonstrate the effectiveness of the model, and several visual examples are provided to clearly illustrate the results at different processing stages. 
Additionally, the final results showcase the model's considerable potential.\", \"weaknesses\": \"Writing Style:\\n1. The writing language is not concise enough, with many long sentences that significantly reduce readability. This is particularly evident in the introduction, such as on the second page: \\\"We construct a Spatial-Semantic Block comprising a semantic embedding module and a spatial dependencies modeling module to efficiently leverage the semantic information and local details of the pre-trained ViTs for VOS without training all the parameters of the ViT backbone.\\\"\", \"image_details\": \"1. In Figure 2, there are N spatial-semantic blocks, but N is not specified later in the paper.\", \"method\": \"1. In Figure 2, the argmax operation in the distinctive query propagation is non-differentiable. Will this prevent the gradient from being propagated through the model?\\n\\n2. If the introduced ViT backbone is not fine-tuned, will its performance degrade on the new dataset? A comparison experiment between freezing and not freezing the parameters is needed here.\\n\\n3. The number of different queries should be related to the number of targets. However, using 8 queries yields better results. When faced with more than 8 targets, can 8 queries adequately represent the different targets?\\n\\n4. In Table 3, there are two XMem entries, one of which is not referenced. It is unclear what the unreferenced entry represents, and why it lacks FPS results needs to be clarified.\\n\\n5. Table 3 lacks a comparison of Joint Former results trained on the MEGA dataset. Please provide the results for Joint Former trained on the MEGA dataset in detail. If the original Joint Former was not trained on this dataset, can it be trained and then compared for performance?\\n\\n6. The spatial-semantic block consists of two parts: first, the global feature cls token is fused with the semantic features, and then further enhanced through Deformable Cross Attention. 
It is necessary to separately validate the effects of directly fusing the features versus applying Deformable Cross Attention for further enhancement.\", \"questions\": \"1.In Figure 2, there are N spatial-semantic blocks, but N is not specified later in the paper.\\n\\n2.In Figure 2, the argmax operation in the distinctive query propagation is non-differentiable. Will this prevent the gradient from being propagated through the model?\\n\\n3.If the introduced ViT backbone is not fine-tuned, will its performance degrade on the new dataset? A comparison experiment between freezing and not freezing the parameters is needed here.\\n\\n4.The number of different queries should be related to the number of targets. However, using 8 queries yields better results. When faced with more than 8 targets, can 8 queries adequately represent the different targets?\\n\\n5.In Table 3, there are two XMem entries, one of which is not referenced. It is unclear what the unreferenced entry represents, and why it lacks FPS results needs to be clarified.\\n\\n6.Table 3 lacks a comparison of Joint Former results trained on the MEGA dataset. Please provide the results for Joint Former trained on the MEGA dataset in detail. If the original Joint Former was not trained on this dataset, can it be trained and then compared for performance?\\n\\n7.The spatial-semantic block consists of two parts: first, the global feature cls token is fused with the semantic features, and then further enhanced through Deformable Cross Attention. It is necessary to separately validate the effects of directly fusing the features versus applying Deformable Cross Attention for further enhancement.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Thanks for your response. My concerns have been solved, and I have adjusted my score. Good luck.\"}",
"{\"title\": \"Response to Reviewer ZJce\", \"comment\": \"We sincerely thank Reviewer ZJce for reviewing this paper.\\n\\n**Q1: Writing** \\n We have rewritten the long sentences to shorter ones and polished the manuscript based on the suggestions to make it concise and clear, which can be seen in the revised version.\\n\\n**Q2: Model Details: N blocks** \\n Our model includes 4 spatial-semantic blocks (i.e. N=4), each interacting with the ViTb (12 layers) model every three ViT layers. This implementation detail is included in the Implementation Details section. As mentioned in the manuscript, we will release our source code and model to the public.\\n\\n**Q3: Argmax operation** \\n We use the Argmax operation to select the index with the highest similarity and do not perform back-propagation. The backpropagation of the relevant part is achieved through gradient flow via skip connections.\\n\\n**Q4: Will the performance on new datasets degrade when the ViT backbone is not fine-tuned\\\\?** \\n Our model was trained on the YT+DAVIS dataset and the MEGA dataset. Without any training on the LVOS dataset, it achieved SOTA results directly during testing (**66.5% and 73.0%**). This demonstrates that our model's performance **does not degrade when transferred to new datasets.** \\n\\n The designed SS block is specifically aimed at better-adapting features from upstream pretrained models to the VOS task. The semantic embedding module integrates semantic information, while spatial dependence modeling facilitates the interaction of multi-scale spatial features. Through this block, we effectively adapt a pretrained model to generate the multi-scale requirements of VOS tasks without the need for fine-tuning.\\n\\n In our ablation study (**Table 1, Row 4**), we directly replaced the feature extraction module with ViT and used FPN to extract multi-scale features. Although this experiment involved full fine-tuning of ViT, it did not significantly improve performance. 
\\n\\n We attempted to perform full fine-tuning of our complete model on NVIDIA A100 GPUs (40GB). However, the training encountered Out-Of-Memory errors, indicating that full fine-tuning requires more computational resources and training time.\\n\\n**Q5: When faced with more than 8 targets, can 8 queries adequately represent the different targets?** \\n The table below provides query numbers with additional values of 4, 10, and 64. The results show that:\\n (1) Increasing the number of queries does not consistently improve the model's performance. Using larger numbers of queries may degrade the performance slightly, as using too many queries will introduce more noise during training and slow down the convergence speed. \\n\\n| Queries | D17 (J&F) | MOSE-val (J&F) | YT19 (J&F) | LVOS (J&F) | YT18 (J&F) |\\n|---------|-------------|----------------|------------|------------|------------|\\n| 64 | 85.0 | 65.5 | 86.7 | 65.2 | 86.6 |\\n| 32 | 85.8 | 68.3 | 86.9 | 64.8 | 86.8 |\\n| 16 | 86.6 | 67.7 | 87.0 | 66.4 | 86.9 |\\n| 10 | 86.4 | 68.2 | 87.0 | 66.4 | 87.1 |\\n| **8** | **86.7** | **68.5** | **87.5** | **66.5** | **87.4** |\\n| 4 | 83.1 | 60.2 | 85.2 | 53.5 | 85.2 |\\n\\n (2) The model achieves the best performance when the number of queries is set to 8. This is primarily because the number of queries has a close relationship with the number of targets. The table below presents the target number statistics across different datasets, which are mostly concentrated between 1 and 5. 
Setting the query number to 8 effectively covers almost all sequences in the datasets, as verified by the experimental results.\\n\\n| Target numbers | YTVOS19 | MOSE | LVOSv2 |\\n|-------------------|---------|------|--------|\\n| 1 | 168 | 188 | 91 |\\n| 2 | 171 | 58 | 31 |\\n| 3 | 132 | 15 | 8 |\\n| 4 | 26 | 22 | 4 |\\n| 5 | 7 | 11 | 0 |\\n| 6 | 3 | 8 | 2 |\\n| 7 | 0 | 4 | 1 |\\n| >=8 | 0 | 5 | 2 |\\n\\n**Q6: Lack of reference for XMem in Table 3.** \\n These two XMem settings represent the same method with different configurations, where the one marked with * is pretrained on static images. We have added the citation in the revised manuscript.\"}",
"{\"title\": \"Response to Reviewer eHC5\", \"comment\": \"We sincerely thank Reviewer eHC5 for reviewing this paper.\\n\\n**Q1\\\\: In Table 1, why is there no separate ablation study for the spatial block and semantic block\\\\?** \\n To save space, we place the detailed ablation study in Appendix Table 7. Table 7 shows that spatial dependence modeling significantly improves performance across all VOS datasets, with a **3.0%+ J&F gain on MOSE val**. This improvement comes from better detail modeling during feature extraction. Adding semantic information further boosts performance, reaching **73% J&F on the LVOS test**, demonstrating its value in enhancing target semantic understanding. We provide some experiments in the table below. For more details, please refer to the Appendix.\\n\\n| **Dataset** | **MOSE** | **D17 test** | **LVOS test** | **YT19** |\\n|------------------------------------|----------|--------------|---------------|----------|\\n| **Method** | **J&F** | **J&F** | **J&F** | **J&F** |\\n| **Trained on YouTubeVOS and DAVIS datasets** | | | | |\\n| XMem (Baseline) | 53.3 | 81.0 | 50.0 | 85.5 |\\n| Cutie | 64.0 | 84.2 | 56.2 | 86.1 |\\n| +Discriminative Query | 64.2 | 85.2 | 57.4 | 86.5 |\\n| +Spatial | 68.2 | 86.2 | 67.4 | 87.3 |\\n| +Semantic (Full) | 68.5 | 86.7 | 66.5 | 87.5 |\\n| **Trained on the MEGA datasets** | | | | |\\n| Cutie | 69.9 | 86.1 | 66.7 | 87.0 |\\n| +Discriminative Query | 70.6 | 86.6 | 66.5 | 87.5 |\\n| +Spatial | 73.5 | 87.6 | 68.8 | 87.9 |\\n| +Semantic (Full) | 74.0 | 87.8 | 73.0 | 88.1 |\\n\\n\\n**Q2\\\\: Some detail issues: In Figure 3, the blue results are not clearly marked and the position of * is not aligned.** \\n We have fixed these details in the revised version.\\n\\n**Q3\\\\: How many trainable parameters and total parameters does the model have\\\\?** \\n In our full version, the total parameters are **226.8 million** and the trainable parameters are **54.1 million**. 
The trainable parameters are mainly in the Spatial-Semantic Blocks, the target association module, and the decoder. \\n\\n**Q4\\\\: Why does \\\"DepthAnything\\\" achieve the best results\\\\?** \\n DepthAnything provides more powerful representations with pixel-level and depth information compared to other pretrained models, which enables efficient target feature representation across diverse scenarios. This is also validated on instance segmentation in the DepthAnything paper.\\n\\n**Q5: If the backbone of the Cutie model is replaced with ViT, would it achieve good results?** \\n The table below compares Cutie with different backbones and our proposed model. The results show that replacing Cutie's backbone with ViT improves performance slightly (**\\\\+0.3% on YT19 and 0.2% on MOSE**). This is because ViT provides stronger representation capabilities compared to ResNet. However, simply replacing the backbone with ViT fails to produce multi-scale features and the associated semantic information. To address this issue, the proposed Spatial-Semantic Block jointly models semantic and multi-scale spatial information, achieving notable improvements across different datasets (**\\\\+1.4% on YT19 and 4.5% on MOSE**). \\n| Methods | Backbone | D17 test (J&F) | YT19 (J&F) | MOSE (J&F) |\\n|---------|-----------|----------------|------------|------------|\\n| Cutie | ResNet50 | 84.2 | 86.1 | 64.0 |\\n| Cutie | ViTb | 84.7 | 86.4 | 64.2 |\\n| Ours | ViTb | 86.7 | 87.5 | 68.5 |\"}",
"{\"title\": \"Please let us know if we address all the issues\", \"comment\": \"Dear Reviewer,\\n\\nThank you for the comments on our paper. We have provided a response and a revised paper on Openreview. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all the issues, and we look forward to resolving any additional questions or concerns you may have.\\n\\nThank you again for your time and effort.\\n\\nBest regards\"}",
"{\"title\": \"Thank you\", \"comment\": \"Dear Reviewer,\\n\\nSince all the questions have been answered, could you consider raising the rating?\\n\\nThank you,\"}",
"{\"title\": \"Response to Reviewer ZJce (2)\", \"comment\": \"**Q7: JointFormer trained on MEGA datasets.**\\n JointFormer has not released its source code, so we tried our best to reproduce the code based on the paper. Unfortunately, the reproduced results differ from those reported in the original paper, making it impossible for us to compare it with our method on the MEGA dataset. Although it was not possible to train JointFormer on the MEGA dataset, a comparison can still be made using models trained only on the YT+DAVIS dataset. Our model achieves better performance than JointFormer on the large-scale datasets YT19 and YT18 with gains of **1.4%** and **1.3%**, respectively.\\n\\n**Q8: What is the individual impact of directly fusing features versus applying Deformable Cross Attention for further enhancement in the spatial-semantic block?** \\n To save space, we place the detailed ablation study in Appendix Table 7. Table 7 shows that spatial dependence modeling significantly improves performance across all VOS datasets, with a **3.0%+ J&F gain on MOSE val**. This improvement comes from better detail modeling during feature extraction. Adding semantic information further boosts performance, reaching **73% J&F on the LVOS test**, demonstrating its value in enhancing target semantic understanding. We provide some experiments in the table below. 
For more details, please refer to the Appendix.\\n\\n| **Dataset** | **MOSE** | **D17** | **LVOS test** | **YT19** |\\n|------------------------------------|----------|--------------|---------------|----------|\\n| **Method** | **J&F** | **J&F** | **J&F** | **J&F** |\\n| **Trained on YouTubeVOS and DAVIS datasets** | | | | |\\n| XMem (Baseline) | 53.3 | 81.0 | 50.0 | 85.5 |\\n| Cutie | 64.0 | 84.2 | 56.2 | 86.1 |\\n| +Discriminative Query | 64.2 | 85.2 | 57.4 | 86.5 |\\n| +Spatial | 68.2 | 86.2 | 67.4 | 87.3 |\\n| +Semantic (Full) | 68.5 | 86.7 | 66.5 | 87.5 |\\n| **Trained on the MEGA datasets** | | | | |\\n| Cutie | 69.9 | 86.1 | 66.7 | 87.0 |\\n| +Discriminative Query | 70.6 | 86.6 | 66.5 | 87.5 |\\n| +Spatial | 73.5 | 87.6 | 68.8 | 87.9 |\\n| +Semantic (Full) | 74.0 | 87.8 | 73.0 | 88.1 |\"}",
"{\"title\": \"Please let us know if we address all the issues\", \"comment\": \"Dear Reviewer,\\n\\nThank you for the comments on our paper. We have provided a response and a revised paper on Openreview. Since the discussion phase ends on Nov 26, we would like to know whether we have addressed all the issues, and we look forward to resolving any additional questions or concerns you may have.\\n\\nThank you again for your time and effort.\\n\\nBest regards\"}",
"{\"metareview\": \"This paper proposes a robust video object segmentation framework by learning spatial-semantic features and discriminative object queries to address challenges encountered in the video object segmentation task. The overall idea seems like a combination of existing techniques, yet experimental results are solid. Four reviewers reach a consensus to accept this paper by discussing it with the authors through rebuttals. After reviewing the comments and rebuttals, AC agrees with the merits proposed in this paper. Therefore, AC recommends accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"All main concerns (e.g., deformable convolution used, advantages with SAM2, missed ablations on the number of frames, comparisons with jointformer, etc.) are addressed by the rebuttal. AC also agrees with the responses of the authors in addressing the concerns. Also, please update the responses in the final version.\"}",
"{\"summary\": \"The authors focus on the problem of video object segmentation in long-term tracking and complex environments. To improve the model's robustness, a spatial-semantic network block is proposed to integrate semantic information with spatial information for video object segmentation. Additionally, a discriminative query mechanism is developed to capture the most representative region of the target for better target representation learning and updating. The proposed method achieves state-of-the-art results on most VOS dataset.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This paper addresses key issues in current applications of VOS methods: long-term tracking, occlusion, and object representation changes.\\n1.\\tThe proposed approach utilizes semantic and spatial information from upstream pre-trained models, enriching the target's semantic and detailed information. It is novel in the field of VOS. \\n2.\\tThe paper proposes a discriminative query generation mechanism to provide the model with more distinctive target information, which is validated on LVOS datasets.\\n3.\\tThe proposed method is validated on various VOS datasets and achieves state-of-the-art results.\", \"weaknesses\": \"The paper does not have obvious weaknesses, but there are still some issues.\\n1.\\tIn Table 1, why is there no separate ablation study for the spatial block and semantic block? 
Please provide this part of the experiment.\\n2.\\tSome detail issues: In Figure 3, the blue results are not clearly marked and the position of * is not aligned.\", \"questions\": \"1.\\tHow many trainable parameters and total parameters does the model have?\\n2.\\tWhy does \\\"DepthAnything\\\" achieve the best results?\\n3.\\tIf the backbone of the Cutie model is replaced with ViT, would it achieve good results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
ELQ8X02IEp | Learning Reliable Rules by Re-generating Deep Features | [
"Yuhe Jiang",
"Zexin Xue",
"Xujie Si"
] | Improving the interpretability and reliability of deep learning models is essential for advancing machine learning applications, though it remains a significant challenge. One promising approach is the integration of logical reasoning into deep learning systems. Previous works have demonstrated that SATNet, a differentiable MaxSAT solver, can learn interpretable and reliable rules from input-output examples in puzzle domains. In this work, we propose *Visual SATNet* (Vi-SATNet), an extended version of SATNet capable of learning logical reasoning rules in more general and complex domains, such as the feature space of real-life images. We find that, given a pre-trained deep convolutional neural network (CNN) architecture, a Vi-SATNet layer can be integrated and trained efficiently to learn a set of reasoning rules on the deep features, guiding the classifier’s decision. Vi-SATNets are trained to perform feature re-generation tasks for a given image dataset, where the re-generated features maintain high accuracy when used for image classification, proving their quality. In our experiment on the Imagenette dataset with a pre-trained VGG19 model, masking out 10\% to 80\% of the features results in classification accuracy ranging from 98.50\% to 93.92\% with Vi-SATNet re-generation, compared to 97.07\% to 9.83\% without re-generation. Furthermore, we introduce a visualization method to illustrate the rules learned by Vi-SATNets, thereby enhancing the interpretability of the pre-trained CNN model. | [
"Interpretable ML",
"Neuro-symbolic AI",
"SATNet",
"Logical Reasoning"
] | Reject | https://openreview.net/pdf?id=ELQ8X02IEp | https://openreview.net/forum?id=ELQ8X02IEp | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"jZfgiWbubp",
"dyq3BgNwfF",
"bzmJ9L2aJZ",
"WEUlo0C8ND",
"VCm40kYq3g",
"UckpfdXm6P",
"Oz5Ichkzx6",
"NyRnYTEZDd",
"LQ5PGkcnAc",
"K4Elp4GWSU",
"HYl8LAeeqU",
"HECRQJUhkj"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"decision",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732228634255,
1730638843714,
1732228545540,
1737524106738,
1734588857978,
1730564074838,
1732228111322,
1732723434016,
1732228217855,
1732228719885,
1730714598181,
1732554982916
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11151/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11151/Reviewer_HqXe"
],
[
"ICLR.cc/2025/Conference/Submission11151/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11151/Area_Chair_RyR1"
],
[
"ICLR.cc/2025/Conference/Submission11151/Reviewer_LBBH"
],
[
"ICLR.cc/2025/Conference/Submission11151/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11151/Reviewer_CKFZ"
],
[
"ICLR.cc/2025/Conference/Submission11151/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11151/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11151/Reviewer_CKFZ"
],
[
"ICLR.cc/2025/Conference/Submission11151/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer HqXe (Part 2)\", \"comment\": \"### Q3: evaluation on real-world datasets.\\n\\n**A3**: We thank the reviewer for the consideration of dataset complexity. We would like to point out that the Imagenette dataset is a subset of ImageNet, which is often considered to contain very complex real-world images. Compared to the CIFAR datasets, which have a resolution of 32x32, Imagenette has a resolution of 320x320. Nevertheless, our architecture is designed to be generalizable to any dataset provided with the corresponding CNN model. We can indeed include ablation studies on the generalization ability of Vi-SATNet to different datasets and classification models.\\n\\n\\n### Q4: Other related works.\\n\\n**A4**: We thank the reviewer for mentioning these recent works. We will add a subsection under the related work section to discuss abstract rule learning on various tasks including general reasoning on images and syntactic rule learning on textual data.\\n\\n[1] Zhaoyu Li, Jinpei Guo, Yuhe Jiang, and Xujie Si. Learning reliable logical rules with satnet. Advances in Neural Information Processing Systems, 36:14837\\u201314847, 2023.\\n\\n[2] Goemans, M. X. and Williamson, D. P. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115\\u20131145, 1995.\\n\\n[3] Zhendong Lei, Shaowei Cai, Dongxu Wang, Yongrong Peng, Fei Geng, Dongdong Wan, Yiping Deng, and Pinyan Lu. Cashwmaxsat: Solver description. MaxSAT Evaluation, 2021.\\n\\n[4] Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. SATNet: Bridging deep learning and\\nlogical reasoning using a differentiable satisfiability solver. In International Conference on Machine Learning, 2019.\"}",
"{\"summary\": \"This manuscript introduces Visual SATNet (Vi-SATNet), an innovative extension of SATNet aimed at learning logical rules within complex feature spaces, particularly those generated by convolutional neural networks (CNNs). By employing Vi-SATNet for feature regeneration, the study illustrates its capacity to improve interpretability in CNNs during image classification tasks. Experiments conducted on datasets such as MNIST and Imagenette reveal promising classification outcomes across different levels of feature masking.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tVi-SATNet extends the capabilities of the original SATNet by enabling the learning of logical reasoning rules within the feature spaces of convolutional neural networks (CNNs). This generalization allows it to operate effectively on complex datasets beyond simple logical puzzles.\\n2.\\tThe study presents a new method for feature regeneration that leverages learned logical rules derived from deep features, thereby enhancing the interpretability and reliability of convolutional neural networks (CNNs).\\n3.\\tVi-SATNet can be seamlessly integrated as a drop-in layer within existing CNN architectures, requiring no retraining or fine-tuning.\", \"weaknesses\": \"1.\\tThe manuscript effectively outlines the architecture of Vi-SATNet and establishes an evaluation framework through feature regeneration tasks, employing cosine similarity and Vi-C agreement as measurable metrics for assessing feature quality. However, it could enhance clarity regarding the Vi-SATNet training process, especially concerning hyperparameter selection.\\n2.\\tThe manuscript describes visualization through minimal significant feature sets (MSFs) and their corresponding receptive fields. However, a more in-depth explanation of how MSFs correlate with specific learned rules would be advantageous. 
Without clear criteria for evaluating rule quality, the reader may find it challenging to interpret the rules\\u2019 significance. \\n3.\\tTo enhance the study's rigor, it is recommended to include additional datasets, such as CIFAR-10 or other complex real-world datasets.\\n4. Some related works are needed to discuss in the manuscript, such as [1-2].\\n\\n[1] Wei J, Garrette D, Linzen T, et al. Frequency effects on syntactic rule learning in transformers[J]. arXiv preprint arXiv:2109.07020, 2021.\\n[2] Zhang W, Mo S, Liu X, et al. Learning robust rule representations for abstract reasoning via internal inferences[J]. Advances in Neural Information Processing Systems, 2022, 35: 33550-33562.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer HqXe\", \"comment\": \"We appreciate the reviewer's constructive feedback and concise summary of our contributions. Please find our detailed responses to each of the comments in the following.\\n\\n### Q1: Vi-SATNet training process and hyperparameters.\\n\\n**A1**: We thank the reviewer for pointing out the confusion on the hyperparameter selection process. In *Table 3* we include explanations for each of the hyperparameters used during training (taking Imagenette as an example). It is worth noting that the Vi-SATNet model is not very sensitive to the values of hyperparameters, which is a nice property inherited from SATNet [4]. During training, we performed minimal ablation on the hyperparameters and found insignificant differences in performance. Hence, we present the performance of simple models for evaluation. More results on ablation studies can be included in future work to assess the robustness of Vi-SATNet models.\\n\\n### Table 3: Vi-SATNet hyperparameters for Imagenette.\\n\\n| Symbol | Value | Meaning |\\n|----------|--------------|------------|\\n| H | 7 | Height of feature map |\\n| W | 7 | Width of feature map |\\n| K | 512 | Dimension of the feature vectors |\\n| n | WxH = 49 | Number of variables |\\n| m | 500 | Number of clauses |\\n| mask_ratio | various values | Proportion of missing values in the feature map |\\n\\n\\n\\n\\n### Q2: Interpretation and significance of the learned rules; explanation of MSFs with respect to the rules.\\n\\n**A2**: The detailed explanation of the correlation between MSFs and the learned rules can be illustrated by the following examples.\\n\\nConsider the following example of an image from the \\\"golf ball\\\" class (this image was included in the paper pdf, but you can also find it in the supplementary material, Figure 1b):\\n\\nThere are around *12K* rules extracted from the weight matrix learned for the class \\\"golf ball\\\" in the form of weighted MaxSAT. 
This set of rules is a discrete representation of what a Vi-SATNet has learned. The rule extraction procedure follows the one presented in [1]. The first few lines of the extracted rules are shown in *Table 1*. Given a target feature vector and its MSFs, we can pinpoint the rules that are related to variables representing the target feature vector and its MSFs, which on average results in a set of less than *100* rules (depending on the size of MSF). The first few lines of the subset for image 1b with f12 as the target feature are shown in *Table 2*. \\n\\n### Table 1: Some rules from the extracted weighted MaxSAT rules for class \\\"golf ball\\\".\\n\\n| Weight | Rule | Meaning |\\n|----------|--------------|------------|\\n| 6 | (!f14 and f12) or (f14 and !f12) | f14 != f12 |\\n| 5 | (!f12 and f3) or (f12 and !f3) | f12 != f3 |\\n| 9 | (f2 and f3) or (!f2 and !f3) | f2 = f3 |\\n| ... | ... | ... |\\n| In total ~12k lines | | |\\n\\n\\n\\n### Table 2: Some rules that are related to the target feature (f12).\\n\\n| Weight | Rule | Meaning |\\n|----------|--------------|------------|\\n| 6 | (!f14 and f12) or (f14 and !f12) | f14 != f12 |\\n| 5 | (!f12 and f3) or (f12 and !f3) | f12 != f3 |\\n| 5 | (!f12 and f4) or (f12 and !f4) | f12 != f4 |\\n| 2 | (!f12 and !f19) or (f12 and f19) | f12 = f19 |\\n| ... | ... | ... |\\n| In total <100 lines | | |\\n\\n\\nNow, each feature vector can be mapped back to the discrete (boolean) space by randomized rounding [2]. Hence, we can deploy an external SAT solver [3] to compute the boolean value of any missing feature vector, given the learned rules (which constrain the relations between the features) and the MSFs (which can be discretized and plugged into the formula as known values). 
By solely inputting the boolean values of the MSFs, the solver is able to correctly output the boolean value for the target feature (in this example, the target feature is f12, and the MSFs are $<$f11, f13, f14, f19, f26$>$, the solver assigns f12 the value of 1). The ground truth boolean value of the target feature is obtained by applying randomized rounding to the feature vector itself (in this example, the ground truth of f12 is also 1). \\n\\nThe exact same procedure can be applied to a different example from the \\\"parachute\\\" class (please kindly find this image in the supplementary material, Figure 1c), with target feature being f13, and MSF indices being $<$12, 14, 20, 28, 47$>$. The solved value for f13 is 1 and the ground truth value for f13 is 1 as well.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"metareview\": \"The proposed Vi-SATNet extends the capabilities of the original SATNet by enabling the learning of logical reasoning rules within the feature spaces of convolutional neural networks (CNNs). The study presents a new method for feature regeneration that leverages learned logical rules derived from deep features, thereby enhancing the interpretability and reliability of convolutional neural networks (CNNs). However, the explanation of the logical reasoning for real-life images is still unclear, and the experimental results as well as the analysis are limited.\", \"additional_comments_on_reviewer_discussion\": \"Dear reviewers,\\nThanks a lot for the reviewing work. Have a good holiday!\"}",
"{\"summary\": \"In this paper, the author proposed Visual SATNet (Vi-SATNet), which targets learning logical reasoning rules in a general domain. Specifically, the author trains and evaluates Vi-SATNet on deep feature re-generation and the experiment results show the effectiveness of the proposed methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The experiment results are convincing, showing the effectiveness of the proposed method.\\n2. The author provides detailed definitions and algorithm processes to demonstrate their ideas and contributions.\", \"weaknesses\": \"1. The author proposes a method to learn logical reasoning rules in a more general domain. However, different from the original SATNet, which targets solving Sudoku, a task with a clear logical process, how can we understand the logical reasoning for real-life images? The author should provide more explanation and analysis.\\n2. In Table 1, the author obtained the best results with regeneration with a 30% mask ratio. However, without regeneration, the model's performance will drop with a higher mask ratio. Why does the model have the best performance with a 30% mask ratio? The author should provide more analysis.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer CKFZ (Part 1)\", \"comment\": \"We thank the reviewer for the nice summary and the thoughtful questions. Below, we address the comments point-by-point.\\n\\n### Q1.1: MSFs are surrounding features.\\n\\n**A1.1**: We completely agree with the reviewer that CNN features are inherently closely related to the surrounding features. However, we would like to point out that *not* all surrounding features are included in the minimal significant feature set (MSFs) found. Specifically, consider the four direct neighbors of a target feature (up, down, left, right): in the MSFs shown in the results, we can see a clear *inclusion* of neighboring features that are in the foreground (if the target feature is in the foreground) and *exclusion* of neighboring features that are in the background. This observation precisely shows the meaning of an MSF: only the features that are significant to the generation of the target feature are identified.\\n\\nPlease kindly find a new example in the supplementary material (Figure 1a), which has an MSF that includes a *top left* feature located at position (0,0) for the target feature that is in the *top right* corner located at position (6,0). The two features included in the MSF are both background features (since the target feature is also in the background) and one of them is *not* a neighboring feature of the target feature.\\n\\nWe hope that this example clarifies the concern of MSFs only containing neighboring features.\\n\\n### Q1.2: Rule interpretation from other perspectives.\\n\\n**A1.2**: We thank the reviewer for asking this question. As mentioned in the paper, we chose to present the learned rules via visualization of the receptive fields of MSFs because it is the most straightforward interpretation method. 
A different method to present the learned rules is by directly showing the extracted propositional formulas on the boolean variables, where each variable represents an abstracted state of the corresponding feature vectors. Consider the following example of an image from the \\\"golf ball\\\" class (this image was included in the paper pdf, but you can also find it in the supplementary material, Figure 1b):\\n\\nThere are around *12K* rules extracted from the weight matrix learned for the class \\\"golf ball\\\" in the form of weighted MaxSAT. This set of rules is a discrete representation of what a Vi-SATNet has learned. The rule extraction procedure follows the one presented in [1]. The first few lines of the extracted rules are shown in *Table 1*. Given a target feature vector and its MSFs, we can pinpoint the rules that are related to variables representing the target feature vector and its MSFs, which on average results in a set of less than *100* rules (depending on the size of MSF). The first few lines of the subset for image 1b with f12 as the target feature are shown in *Table 2*. \\n\\n### Table 1: Some rules from the extracted weighted MaxSAT rules for class \\\"golf ball\\\".\\n\\n| Weight | Rule | Meaning |\\n|----------|--------------|------------|\\n| 6 | (!f14 and f12) or (f14 and !f12) | f14 != f12 |\\n| 5 | (!f12 and f3) or (f12 and !f3) | f12 != f3 |\\n| 9 | (f2 and f3) or (!f2 and !f3) | f2 = f3 |\\n| ... | ... | ... |\\n| In total ~12k lines | | |\\n\\n\\n\\n### Table 2: Some rules that are related to the target feature (f12).\\n\\n| Weight | Rule | Meaning |\\n|----------|--------------|------------|\\n| 6 | (!f14 and f12) or (f14 and !f12) | f14 != f12 |\\n| 5 | (!f12 and f3) or (f12 and !f3) | f12 != f3 |\\n| 5 | (!f12 and f4) or (f12 and !f4) | f12 != f4 |\\n| 2 | (!f12 and !f19) or (f12 and f19) | f12 = f19 |\\n| ... | ... | ... 
|\\n| In total <100 lines | | |\\n\\n\\nNow, each feature vector can be mapped back to the discrete (boolean) space by randomized rounding [2]. Hence, we can deploy an external SAT solver [3] to compute the boolean value of any missing feature vector, given the learned rules (which constrain the relations between the features) and the MSFs (which can be discretized and plugged into the formula as known values). By solely inputting the boolean values of the MSFs, the solver is able to correctly output the boolean value for the target feature (in this example, the target feature is f12, the MSFs are $<$f11, f13, f14, f19, f26$>$, and the solver assigns f12 the value of 1). The ground truth boolean value of the target feature is obtained by applying randomized rounding to the feature vector itself (in this example, the ground truth of f12 is also 1).\"}",
"{\"comment\": \"Thanks to the authors for their efforts and responses, which have addressed part of my concerns. Hence, I will increase my score to a 6.\"}",
"{\"title\": \"Response to Reviewer CKFZ (Part 2)\", \"comment\": \"### Q2: Input feature maps from other layers in a CNN.\\n\\n**A2**: In this work, we selected the feature maps from the last convolution layers because they are representations at the highest level of abstraction, which should yield the most interpretable logical rules. Using feature maps from other convolution layers as the input to our model is definitely possible, and distinct rule sets are anticipated to be learned since the input space is at a different level of abstraction (i.e. the meaning of the boolean variables in the learned rules would be different at each layer). In other words, low-level features would yield a set of low-level reasoning rules, for example, propositional rules that directly reason on the pixel space, which would be more local and less interpretable.\\n\\n[1] Zhaoyu Li, Jinpei Guo, Yuhe Jiang, and Xujie Si. Learning reliable logical rules with SATNet. Advances in Neural Information Processing Systems, 36:14837\\u201314847, 2023.\\n\\n[2] Goemans, M. X. and Williamson, D. P. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115\\u20131145, 1995.\\n\\n[3] Zhendong Lei, Shaowei Cai, Dongxu Wang, Yongrong Peng, Fei Geng, Dongdong Wan, Yiping Deng, and Pinyan Lu. CashWMaxSAT: Solver description. MaxSAT Evaluation, 2021.\"}",
"{\"title\": \"Response to Reviewer LBBH\", \"comment\": \"We thank the reviewer for the time and effort put into evaluating our work. We address each of the concerns in detail below.\\n\\n### Q1: understanding logical reasoning for real-life images.\\n\\n**A1**: To understand logical reasoning for real-life images, we need two critical specifications: 1. a proper abstraction (or representation) over the information encoded in a given image (i.e. the construction of predicates and their meaning), and 2. logical rules on top of a given abstraction. \\n\\nIn this work, we mainly focus on the second point and learn propositional rules on top of the abstraction of feature vectors (since classifiers rely on the feature maps to perform predictions). In fact, it is possible to fit other abstractions into our Vi-SATNet models since they are designed to be a drop-in layer given any abstraction (in this work, feature maps) paired with a reasoner (in this work, a classifier).\\n\\n\\n\\n### Q2: performance peak at 30% masking ratio.\\n\\n**A2**: We thank the reviewer for the detailed observation in our results. The difference in the performance of the model from 10% masking to 30% masking is marginal. We have evaluated the model again across **ten** runs and the results are summarized in *Table 4* (we have updated these results accordingly in the revised pdf). Increasing the mask ratio makes the reasoning task more difficult and hence results in lower accuracy. We can see that with re-generation of feature vectors given by our model, the classification accuracy remained above 90% even when 80% of the values are missing, significantly outperforming the vanilla VGG-19.\\n\\n### Table 4: Classification accuracy with different blur mask ratios on Imagenette. 
Mean accuracy and error bar reported on 10 runs for each mask ratio.\\n\\n| Mask Ratio (%) | w/o Regeneration (%) | w Regeneration (%) |\\n|----------|----------|----------|\\n| 10 | 97.20$\\\\pm$0.06 | 98.48$\\\\pm$0.01 |\\n| 20 | 83.50$\\\\pm$0.09 | 98.49$\\\\pm$0.01 |\\n| 30 | 46.20$\\\\pm$0.08 | 98.49$\\\\pm$0.02 |\\n| 40 | 23.38$\\\\pm$0.05 | 98.44$\\\\pm$0.03 |\\n| 50 | 11.73$\\\\pm$0.05 | 98.32$\\\\pm$0.03 |\\n| 60 | 9.85$\\\\pm$0.01 | 98.13$\\\\pm$0.05 |\\n| 70 | 9.84$\\\\pm$0.004 | 97.45$\\\\pm$0.05 |\\n| 80 | 9.83$\\\\pm$0.001 | 94.23$\\\\pm$0.10 |\\n| 90 | 9.84$\\\\pm$0.003 | 72.87$\\\\pm$0.18 |\\n| 100 | 9.84$\\\\pm$0.005 | 6.83$\\\\pm$0.11 |\"}",
"{\"summary\": \"This paper extends SATNet by enabling it to learn logical rules from the complex feature space of real-life images, allowing it to regenerate masked features while maintaining high classification accuracy.\\nExperimental results on the features from some pre-trained models show that the learned reasoning rules allow Vi-SATNet to re-generate missing feature vectors accurately.\\nFinally, the authors present a visualization technique to illustrate the rules learned by Vi-SATNet models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Experiments on MNIST and Imagenette datasets demonstrate the effectiveness of Vi-SATNet in feature regeneration and classification, showing impressive results even under high masking ratios.\\n\\nThe visualizations of the learned rules offer an intuitive understanding of feature dependencies.\", \"weaknesses\": \"The visualization of the learned rules is not very clear. Whether in the foreground or background, MSF tends to pay more attention to the position around the target, which is a rather trivial result.\", \"questions\": \"As mentioned in the weaknesses section, the visualization of the learned rules indicates that the target-related MSF pays more attention to its surrounding features, which is a trivial finding. Since CNN features are derived from convolutions with neighboring pixels, the features at each position are inherently closely related to those in the surrounding area. Could you provide some more meaningful rules from different perspectives?\\n\\nThe features in the paper appear to be derived from the last layer of the convolutional module in the model. What would happen if features from other convolutional layers were used? Could different layers with varying depths extract distinct rules?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Dear reviewers, please kindly let us know if you have further questions, concerns or suggestions\", \"comment\": \"Dear reviewers, please kindly let us know if you have further questions, concerns or suggestions regarding to our work, and we would be very happy to answer them.\"}"
]
} |
EKfcngSxwD | Incrementally Adapting Generative Vision-Language Models with Task Codebook | [
"Jinghao Zhou",
"Ahmet Iscen",
"Mathilde Caron",
"Christian Rupprecht",
"Philip Torr",
"Cordelia Schmid"
] | With the help of large-scale pre-training, generative Vision-Language Models (VLMs) have acquired general-purpose capabilities.
As downstream applications diversify, it is imperative for VLMs to learn and adapt continuously without experiencing catastrophic forgetting or necessitating complete retraining.
In this work, we analyze the forgetting behavior of VLMs and propose a solution to enhance their incremental learning abilities.
We introduce a Task Codebook within VLMs, enabling efficient retrieval of task-specific parameters for model adaptation.
Our evaluation encompasses a diverse set of tasks spanning a wide range of visual domains and textual instructions.
Experiments demonstrate that our approach effectively mitigates forgetting, even under highly demanding task sequences. | [
"Generative Vision-Language Models",
"Incremental Learning"
] | Reject | https://openreview.net/pdf?id=EKfcngSxwD | https://openreview.net/forum?id=EKfcngSxwD | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yoNUf0hAFe",
"qEF9bpV5BX",
"ZQVFfPPBJG",
"ZCEj8ax8Cf",
"7TngXQT3xX"
],
"note_type": [
"official_review",
"meta_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1730559779891,
1734584175762,
1731017022891,
1737523853222,
1730543177969
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7645/Reviewer_gpJh"
],
[
"ICLR.cc/2025/Conference/Submission7645/Area_Chair_MuuW"
],
[
"ICLR.cc/2025/Conference/Submission7645/Reviewer_9brX"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7645/Reviewer_SjeC"
]
],
"structured_content_str": [
"{\"summary\": \"This paper enables VLMs to handle multiple visual tasks through task codes, equipping the model with instruction-following capabilities while avoiding catastrophic forgetting in incremental learning. Additionally, the method provides a rich dataset for incremental learning.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces a codebook to encode tasks, enabling the handling of different tasks during the inference phase and preventing catastrophic forgetting in incremental learning. I agree this approach is reasonable and interesting.\\n\\n2. The article proposes a new multi-task dataset that covers various cross-modal tasks and different incremental settings.\\n\\n3. The paper conducts extensive experiments to demonstrate the effectiveness of the method.\", \"weaknesses\": \"In my opinion, this paper's weakness lies in its overlap with the field of multimodal large language models (MLLMs). The authors need to clarify the differences from MLLMs, such as LLaVA, which excels not only in instruction-following but also shows strong zero-shot performance in tasks like captioning, VQA, and OCR. 
Additionally, models like Ferret demonstrate localization capabilities.\\n\\nTo address this limitation more thoroughly, I suggest the authors:\\n\\nInclude a dedicated section comparing their approach to recent MLLMs like LLaVA and Ferret, highlighting key differences in architecture, training approach, and capabilities.\\nConduct experiments to compare their method\\u2019s performance to these MLLMs on the proposed benchmark, especially for tasks where MLLMs have demonstrated strong zero-shot performance.\\nDiscuss the potential advantages of their approach over MLLMs in incremental learning scenarios, if applicable.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper proposes to incrementally adapt generative vision-language models using a task codebook. The paper received scores of 5, 6, 3. The reviewers found some aspects of the approach interesting. However, the critical issue that was raised was limited novelty. Moreover, there was no rebuttal. The AC agrees with the reviewers' concerns, and recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"There was no discussion because no rebuttal was provided.\"}",
"{\"summary\": \"In this paper, the authors introduced task codebooks to help VLMs adapt to multiple different and diverse tasks while minimizing forgetting. Their codebook involves using multiple task-specific MLPs (that act as values) that each corresponds to each task-specific key. They use the outputs of a certain layer predetermined by a hyperparameter $l$ as the guidance to learn the key that represents the tasks. Then, to determine the task during inference, they simply do nearest neighbor lookup with outputs from the same layer to the different MLPs. They also introduced a benchmark targeted to expose the catastrophic forgetting phenomenon amongst VLMs. The benchmark consists of 36 different datasets across 8 applications and 4 incremental learning setups. In their experiments, they are able to show how their method can successfully learn the correct tasks keys for almost all 36 different tasks. They also showed how their method outperforms models that are trained on multitasking and other anti-forgetting training methods such as prompt/prefix tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors introduced an intuitive method to map task keys (outputs of a certain layer) to task values (the corresponding MLPs to deal with the tasks). Their benchmark also encompasses a variety of datasets and setups that surely exposes many models' incapabilities as they forget their knowledge when training workloads come sequentially. Their method also outperforms all other reported models that aim to tackle the same/similar issues, even when it comes to the popular prompt/prefix tuning methods.\", \"weaknesses\": \"1. The idea of using multiple different MLPs for each task sounds familiar when compared to the concept of Mixture-of-Experts (MoE). The paper did not mention this concept, nor did they compare.\\n2. 
The models evaluated are pretrained on closed-source datasets including WebLI-100M, which is supposed to be a carefully curated high-quality dataset. It is possible that due to the difference in data quality, other models may underperform compared to the authors'.\\n3. Using nearest-neighbor lookup sounds simple, but when the number of tasks increases, so does the extra time that lookup incurs during inference. In the paper, however, only lookup accuracy is discussed.\\n4. In the end, the paper focuses on \\\"sequential\\\" workloads. This demands that during training, every set of tasks comes sequentially instead of randomly. In the experiments, however, the authors themselves demonstrated how using multitask models (pretrained where all training data across different tasks are randomized) mostly outperform their method.\", \"questions\": \"1. The citation format is incorrect and makes the paper a bit hard to read. All the cited authors and years should have a pair of parentheses around them. Maybe you used \\\\cite instead of \\\\citep?\\n2. Typos: line 148: the two \\\"processing\\\" should be \\\"processor\\\".\\n3. There could be some sort of comparison to demonstrate how this method is better than, or distinct from, using MoE to improve performance.\\n4. Is it possible that the models would perform better because of better training data? Is there a specific reason why you chose to use WebLI as the pretraining dataset?\\n5. It may be better to include a brief study on how much more time this process takes during training/inference.\\n6. This may be a more general question, but why would one prefer a sequential workload over a multitask/random workload during training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper focuses on continuously adapting vision-language models (VLMs). It assumes that tasks arrive sequentially. To solve this problem, this paper introduces a task codebook mechanism that contains task keys and task values. The task key is used to identify the task, and the task value contains the parameters of an adapter that is integrated with the base model for improving performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is easy to follow. The readers can easily understand what problem this paper is trying to solve and how it is solved.\\n \\n2. The diagram is easy to understand.\\n \\n3. This work also introduces a benchmark alongside the method, which completes this work.\", \"weaknesses\": \"1. I have a few concerns about this incremental learning setting: 1) what is the essential challenge of the sequential setting? Current VLMs collect a vast amount of data covering a wide range of tasks. 2) With several new tasks, why don\\u2019t we finetune the model with all these tasks? This method shows better performance than TCIA. 3) I don\\u2019t quite understand what limits the model to performing multi-task fine-tuning. I still think that multi-task fine-tuning is a more feasible solution.\\n \\n2. Is the task key necessary? Because the question is in text format, it\\u2019s easy to classify the task with the input question text. A lot of LLM-based models can detect the task type from text. \\n \\n3. In terms of novelty, the novelty of this work is limited. The task codebook is a collection of adapters, and the task key seems replaceable by the inherent reasoning ability of LLMs. PEFT is common in the fine-tuning community.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
EKaVO0ceh8 | Projection Optimal Transport on Tree-Ordered Lines | [
"Hoang V. Tran",
"Huyen Trang Pham",
"Tho Tran Huu",
"Minh-Khoi Nguyen-Nhat",
"Thanh Chu",
"Tam Le",
"Tan Minh Nguyen"
] | Many variants of Optimal Transport (OT) have been developed to address its heavy computation. Among them, notably, Sliced Wasserstein (SW) is widely used for application domains by projecting the OT problem onto one-dimensional lines, and leveraging the closed-form expression of the univariate OT to reduce the computational burden. However, projecting measures onto low-dimensional spaces can lead to a loss of topological information. To mitigate this issue, in this work, we propose to replace one-dimensional lines with a more intricate structure, called \emph{tree systems}. This structure is metrizable by a tree metric, which yields a closed-form expression for OT problems on tree systems. We provide an extensive theoretical analysis to formally define tree systems with their topological properties, introduce the concept of splitting maps, which operate as the projection mechanism onto these structures, then finally propose a novel variant of Radon transform for tree systems and verify its injectivity. This framework leads to an efficient metric between measures, termed Tree-Sliced Wasserstein distance on Systems of Lines (TSW-SL). By conducting a variety of experiments on gradient flows, image style transfer, and generative models, we illustrate that our proposed approach performs favorably compared to SW and its variants. | [
"projection optimal transport",
"optimal transport"
] | Reject | https://openreview.net/pdf?id=EKaVO0ceh8 | https://openreview.net/forum?id=EKaVO0ceh8 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"kkcbxejigl",
"kPL8ZNDxIT",
"i0AxdhAdl8",
"f98GYcfwDA",
"er4pcQBTw5",
"dgxa55mkZ3",
"aFgrMJnQZX",
"ZtYj11s8gj",
"ZbJk36vky7",
"YU7r3aUxAh",
"YIy4VLbtdD",
"XiEDqIS6dh",
"Tp6HWlhoo7",
"SLW2YYmuM6",
"R89xjEkZXb",
"POJJmOwM3m",
"PNS0DXgoH9",
"OsD7wkJntq",
"MeEzX2Yxet",
"KV2ZdiIRS8",
"JhMVOo4asd",
"IgNXQLENjC",
"INAA5BlNWc",
"IL6wANg5x8",
"G88BMjFTSn",
"ChOVg3GRxx",
"A2oIBCJeCW",
"8V10A2xkfE",
"6yjEjovMMM",
"4hW9Rq5fS7",
"3zYyICua2b",
"1ieSTaYlAp",
"0uTcfKufCu",
"0NpVLjIrFX"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732222549080,
1731991069881,
1732016967507,
1732353829830,
1732354999757,
1732184283039,
1732222457348,
1732879108234,
1732001082860,
1731990986260,
1732879652077,
1730721809865,
1732184261315,
1734739319692,
1730560062679,
1732001172456,
1732222772261,
1732001022587,
1730021617231,
1730289273747,
1732187836724,
1732390920188,
1732188817926,
1732001049752,
1732182698344,
1731926413376,
1732187656893,
1732530227136,
1737523868943,
1732222351249,
1732186707298,
1733160688848,
1732491811823,
1732182651917
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Reviewer_HzfV"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Reviewer_tDY2"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Area_Chair_miNC"
],
[
"ICLR.cc/2025/Conference/Submission7837/Reviewer_MtTT"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Reviewer_HzfV"
],
[
"ICLR.cc/2025/Conference/Submission7837/Reviewer_7kaA"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7837/Reviewer_tDY2"
],
[
"ICLR.cc/2025/Conference/Submission7837/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Any Questions from Reviewer 7kaA on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"comment\": \"**W1. The poor evaluation is the main weakness of the paper. The authors provided several experiments, but only Tables 1 and 2 offer an expressive justification of the proposed method.**\\n\\n**Answer.** The paper proposes a straightforward alternative to Sliced Wasserstein (SW) by substituting lines with tree systems, focusing primarily on comparing Tree-Sliced Wasserstein (TSW-SL) with the original SW. A significant advantage of tree systems is that they *serve as a simple replacement for lines while preserving all the beneficial properties of lines in SW*.\\n\\nEnhancements to TSW-SL could be achieved by incorporating advanced techniques designed for SW, as tree systems can be locally considered as lines. Techniques applied to lines, such as the Generalized Sliced Wasserstein [2], could similarly be adapted for TSW-SL. However, while these extensions may appear straightforward, their implementation involves significant challenges. For instance, it is crucial to ensure that any TSW-SL variant retains the injectivity of the corresponding Radon Transform. \\n\\nGiven the extensive research and development of improved SW variants over the years, we do not anticipate TSW-SL to surpass the performance of more recent SW variants. Our focus remains on establishing the foundational aspects of TSW-SL, leaving further developments for future research.\\n\\n**Q4. Table 4 does not demonstrate that the proposed method actually provides better results in FID.**\\n\\nRegarding Table 4, which relates to Denoising Diffusion Models, we closely follow [3], the most recent paper evaluating SW methods in diffusion models, to the best of our knowledge. 
Table 4 reports:\\n\\n- The FID for the vanilla SW method is 2.90.\\n- The FIDs for four variants using random-path projecting directions from [3] are 2.70, 2.82, 2.87, and 2.88.\\n\\nBy merely replacing the foundation lines with tree systems, without introducing new techniques, our model using TSW-SL achieves an FID of 2.83. As noted by reviewer 7kaA, extensions of TSW-SL are also under consideration at this venue. One such submission can be found at [this link](https://openreview.net/forum?id=OiQttMHwce), where models using TSW-SL with additional techniques achieve FIDs of 2.60 and 2.525.\\n\\nFor these reasons, we believe the paper should be evaluated on the development of tree systems and the corresponding Radon Transform aspects (*i.e., an apples-to-apples comparison*), rather than focusing on numerical comparisons with state-of-the-art methods.\\n\\n---\\n\\nWe sincerely thank the reviewer for the valuable feedback. If our responses adequately address all the concerns raised, we kindly hope the reviewer will consider raising the score of our paper.\"}",
"{\"comment\": \"**Comment 3. **Line 467:** The loss should not be denoted as $\\\\mathcal{L}$ since this letter represents a line.**\\n\\n**Answer.** Thank you for your correction. We have made the adjustments in our manuscript.\\n\\n**Comment 4. **Section 6.2:** It is difficult to visually confirm that *TSW-SL produces images that most closely resemble the target*.**\\n\\n**Answer.** Figure 5 showcases both qualitative and quantitative results. Qualitatively, methods that yield images with colors more closely matching the target image are considered superior. The figure displays the generated images and the corresponding 2-Wasserstein distances at the final epoch, with a smaller 2-Wasserstein distance indicating better performance.\\n\\n**For a more detailed qualitative comparison of our methods with SW, MaxSW, and other SW variants, we provide an additional example of the color-transfer task in Appendix E.2**. This example further demonstrates that our TSW-SL and MaxTSW-SL loss functions, when used to measure the distance between two color distributions, achieve superior results both qualitatively and quantitatively.\\n\\n**Comment 5. **Appendix p.17, line 870:** $\\\\overline{L}$ should be $\\\\mathcal{L}$.**\\n\\n**Answer.** Thank you for your correction. We have made the adjustments in our revision.\\n\\n\\n**Comment 6. **Example A.14 is unclear:** $n_i$ should have length 6 (i.e., $i = 0$ to $5$). What do the arrows signify? Additionally, an arrow seems to be missing on line 1035. The progression through step $i$ is confusing.**\\n\\n**Answer.** Thank you for your correction. We have revised the Example A.14 in our revision as follows:\\n\\n**Definition A.13.** Let $T$ be a nonnegative integer, and $n_1, \\\\ldots, n_T$ be $T$ positive integer. 
A sequence $s = \\\\\\\\{x_i\\\\\\\\}\\\\_{i=0}^T$, where $x_i$ is a vector of $n_i$ nonnegative numbers, is called a *tree representation* if $x_0 = [1]$, and for all $ 1 \\\\le i \\\\le T$, $n_i$ is equal to the sum of all entries in vector $x_{i-1}$.\\n\\n\\n**Example A.14.** For $T = 5$ and $\\\\{n_i\\\\}_{i=1}^{5} = \\\\{1,3,4,2,3\\\\}$, the sequence\\n $s ~ \\\\colon ~ x_0 = [1] \\n \\\\rightarrow x_1 = [3] \\n \\\\rightarrow x_2 = [2,1,1] \\n \\\\rightarrow x_3 = [1,0,2,0] \\\\rightarrow x_4 = [1,2] \\n \\\\rightarrow x_5 = [0,0,1].$ \\nis a tree representation.\\n\\n**Comment 7. **Page 25:** Please provide more detail about the transition from eq. (46) to eq. (47).**\\n\\n**Answer.** We recall the transition from Eq. (46) to Eq. (47).\\n\\n> $\\\\displaystyle \\\\int_{\\\\mathbb{T}} \\\\biggl(\\\\text{W}\\\\_{d\\\\_\\\\mathcal{L},1}(\\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_1, \\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_2) + \\\\text{W}\\\\_{d\\\\_\\\\mathcal{L},1}(\\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_2, \\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_3) \\\\biggr) ~d\\\\sigma(\\\\mathcal{L}) \\\\ge \\\\int_{\\\\mathbb{T}} \\\\text{W}\\\\_{d\\\\_\\\\mathcal{L},1}(\\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_1, \\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_3) ~d\\\\sigma(\\\\mathcal{L})$\\n\\nThis transition comes directly from the inequality\\n\\n$\\\\text{W}\\\\_{d\\\\_\\\\mathcal{L},1}(\\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_1, \\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_2) + \\\\text{W}\\\\_{d\\\\_\\\\mathcal{L},1}(\\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_2, \\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_3) \\\\ge \\\\text{W}\\\\_{d\\\\_\\\\mathcal{L},1}(\\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_1, \\\\mathcal{R}^\\\\alpha\\\\_\\\\mathcal{L} \\\\mu\\\\_3),$\\n\\nsince the Wasserstein distance 
$\\\\text{W}\\\\_{d\\\\_\\\\mathcal{L},1}$ is a metric on the space of all measures on $\\\\mathcal{L}$.\\n\\n---\\n\\nWe sincerely thank the reviewer for the valuable feedback. If our responses adequately address all the concerns raised, we kindly hope the reviewer will consider raising the score of our paper.\"}",
"{\"comment\": \"Thank you for addressing my (minor) concerns. I am satisfied with the content of this paper and would recommend acceptance.\"}",
"{\"title\": \"Thanks for your endorsement!\", \"comment\": \"Thanks for your response, and we appreciate your endorsement.\"}",
"{\"title\": \"General Response (2/2)\", \"comment\": \"**Q2. Evaluation of TSW-SL.**\\n\\nThe paper proposes a straightforward alternative to Sliced Wasserstein (SW) by substituting lines with tree systems, focusing primarily on comparing Tree-Sliced Wasserstein (TSW-SL) with the original SW. A significant advantage of tree systems is that they *serve as a simple replacement for lines while preserving all the beneficial properties of lines in SW*.\\n\\nEnhancements to TSW-SL could be achieved by incorporating advanced techniques designed for SW, as tree systems can be locally considered as lines. Techniques applied to lines, such as the Generalized Sliced Wasserstein [1], could similarly be adapted for TSW-SL. However, while these extensions may appear straightforward, their implementation involves significant challenges. For instance, it is crucial to ensure that any TSW-SL variant retains the injectivity of the corresponding Radon Transform. \\n\\nFrom an empirical perspective, replacing lines with tree systems shows promising results. To illustrate this, let's consider the Denoising Diffusion Models task in our paper. We closely follow [2], which, to the best of our knowledge, is the most recent paper evaluating SW methods in diffusion models. Table 4 reports the following:\\n\\n- The FID for the vanilla SW method is 2.90.\\n- The FIDs for four variants using random-path projecting directions from [2] are 2.70, 2.82, 2.87, and 2.88.\\n\\nBy merely replacing the foundation lines with tree systems, without introducing new techniques, our model using TSW-SL achieves an FID of 2.83. Models using TSW-SL with additional techniques (in the mentioned submission) achieve FIDs of 2.60 and 2.525.\\n\\nGiven the extensive research and development of improved SW variants over the years, we do not anticipate TSW-SL to surpass the performance of more recent SW variants. 
Our focus remains on establishing the foundational aspects of TSW-SL, leaving further developments for future research.\\n\\n---\\n\\n**References.**\\n\\n[1] Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, and Gustavo Rohde. Generalized sliced Wasserstein distances. Advances in Neural Information Processing Systems, 32, 2019.\\n\\n[2] Khai Nguyen, Shujian Zhang, Tam Le, and Nhat Ho. Sliced Wasserstein with random-path projecting directions. In Forty-first International Conference on Machine Learning, 2024.\\n\\n---\\n\\nWe are glad to answer any further questions you have on our submission.\"}",
"{\"title\": \"Any Questions from Reviewer MtTT on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"title\": \"Reply to reviewer MtTT\", \"comment\": \"Dear Reviewer MtTT,\\n\\nThank you for your detailed feedback and thoughtful analysis. We appreciate the time you\\u2019ve taken to evaluate our work and provide constructive insights.\\n\\nWe would like to further address your concerns regarding the significance of our proposed framework below.\\n\\n- **Application to ML-related problems**: While we understand the importance of demonstrating the relevance of our framework to real-world ML-related problems, we emphasize that the primary contribution of our paper is the introduction of tree systems as a novel integration domain within the Sliced Wasserstein (SW) framework. As noted in the revised paper (Line 377), our goal is to present this as an alternative to traditional lines in SW, with the focus being on comparing TSW-SL with the original SW, rather than developing a loss that can achieve state-of-the-art results in a specific domain like generative models.\\n\\n- **Soundness of Our Proposed Distance in the Optimal Transport Field**: To the best of our knowledge, **our work introduces the first method to formally construct a tree system that preserves topological properties, accommodates dynamic supports, and achieves computational efficiency comparable to the Sliced Wasserstein (SW) distance**. TSW-SL serves as a bridge between SW and TSW, leveraging the increased flexibility of TSW while retaining the dynamic adaptability characteristic of SW. We believe our method is non-trivial for several reasons. Firstly, it opens up new avenues for efficient computation in Optimal Transport, overcoming the limitations of both SW and TSW. As discussed in our global aspect of our General Response, the framework we present lays the groundwork for future variants of TSW-SL, which could retain its advantages while incorporating more sophisticated improvements. 
For example, future enhancements could involve refining the splitting maps to incorporate positional information from both points and lines, rather than relying solely on lines as in the current implementation. This idea is a central part of an extension to the TSW-SL framework, referenced by Reviewer 7kaA, and is also being submitted to this venue, with access available [here](https://openreview.net/forum?id=OiQttMHwce).\n- **Broader impacts of our work in real-world applications**: Beyond generative models for images, there is a need to efficiently compare probability distributions in applications such as Visual Place Recognition [7] and natural language processing tasks [8, 9, 10], where slow algorithms like Sinkhorn are often used despite SW\u2019s promise, since SW\u2019s computation relies on projecting the supports from the original space into a 1-dimensional space, which causes a loss of topological information. We believe that the introduction of our TSW-SL, and of future extensions of TSW-SL, will be a better option for advancing these fields.\n\n- **The choice of experimental tasks**: We report the superiority of our methods over other SW-variants on tasks ranging from a synthetic task (gradient flow) to real tasks (color transfer and generative models including GAN and DDGAN), since each experiment highlights a different aspect of the empirical advantage of our methods over SW. From synthetic tasks to real-world problems such as generative models, we conduct experiments across various datasets, ranging from simpler datasets like CIFAR-10 (32\u00d732) to more complex datasets like STL-10 (96\u00d796) and CelebA (64\u00d764). Most results show that our methods consistently outperform SW. Recognizing that experiments with SN-GAN are sufficient for simpler models, we extend our experiments to DDGAN due to its superior image generation quality. 
The results also indicate that our models outperform SW. We do not report the FID score as mean and standard deviation for DDGAN, as we do for SN-GAN, since the experiments with DDGAN take a long time to train. Most of the baselines we compare with are from [6], and this work does not provide the standard deviation for the DDGAN experiment due to the long training time. Under resource constraints, we cannot repeat the training multiple times to report mean and standard deviation results.\n\nOf all the broader impacts mentioned above, we choose to conduct our experiments on gradient flow, color transfer, and generative models since, in the SW literature, there are existing works that validate their empirical advantages through these experimental settings (gradient flow [1, 2], Generative Adversarial Networks [3, 4], color transfer [5], Denoising Diffusion Model [6]). We think that comparing our methods under the same settings as existing works ensures a fairer comparison and alleviates the need to re-implement all previous methods in new tasks, thus completing the broad picture of our evaluation.\"}",
"{\"comment\": \"**Q4 + W1. As noted in the conclusion, the paper \\\"introduces a straightforward alternative to SW by replacing one-dimensional lines with tree systems,\\\" aiming to provide a more geometrically meaningful space. This objective aligns with several SW variants ... it remains unclear why and under what circumstances TSW-SL should be preferred over SW or its many variants.**\\n\\n**It is stated in the conclusion that improved performances are expected by adapting recent advanced sampling schemes or techniques of SW to TSW-SL. It seems that the extension is not that straightforward: can you comment on that, and how improved performances can be expected?**\\n\\n**Answer:** Thanks for your nice question. We are eager to elaborate further on it. Roughly speaking, the tree-sliced framework in our paper is built on two key insights:\\n\\n- Local Perspective: Each line in a tree system is treated similarly to a line in the Sliced Wasserstein (SW) framework. Splitting maps determine how the mass at each point is distributed across the lines, and then the projection of these mass portions onto the lines is processed in the same way as in SW. A significant challenge with this approach is verifying whether the injectivity of the corresponding Radon Transform is preserved, as this determines whether the proposed metric qualifies as a true metric or merely a pseudo-metric. However, we addressed this concern by providing a proof in the paper.\\n- Global Perspective: Tree structures and splitting maps establish connections between the lines, creating a cohesive system. This introduces a novel aspect compared to SW, enabling interaction and integration among the lines in a tree system. 
The Wasserstein distance can now be computed on this space with a closed-form expression, analogous to how lines are treated in SW.\\n\\nFrom these insights, several follow-up directions for TSW-SL can be pursued, focusing on each perspective:\\n\\n- Local Perspective: Consider, for example, the Generalized Sliced Wasserstein (GSW) distance [1]. In GSW, the SW framework is retained, but the projection mechanism is altered. Specifically, GSW generalizes the integration level set in the Radon Transform, replacing the level set defined by the inner product (representing orthogonal hyperplanes) with one defined by an arbitrary function. Similarly, in TSW-SL, which currently relies on the inner product, a framework could be developed that generalizes this by using an arbitrary function, offering new flexibility and applications.\\n\\n- Global Perspective: More detailed analysis can aid in designing improved tree structures and splitting maps. For instance, splitting maps could be enhanced to account for both positional information from points and lines, rather than relying solely on lines as in the current implementation. This approach forms the core concept of one extension of the TSW-SL paper, which has also been submitted to this venue and can be accessed [here](https://openreview.net/forum?id=OiQttMHwce).\\n\\nIt is important to note that while these ideas might seem straightforward, their development is non-trivial. For example, a critical property that must be preserved in TSW-SL variants is the injectivity of the corresponding Radon Transform. Additionally, from an empirical perspective, replacing lines with tree systems shows promising results. To illustrate this, let's consider the Denoising Diffusion Models task in our paper. We closely follow [2], which, to the best of our knowledge, is the most recent paper evaluating SW methods in diffusion models. 
Table 4 reports the following:\n\n- The FID for the vanilla SW method is 2.90.\n- The FIDs for four variants using random-path projecting directions from [2] are 2.70, 2.82, 2.87, and 2.88.\n\nBy merely replacing the foundational lines with tree systems, without introducing new techniques, our model using TSW-SL achieves an FID of 2.83. Models using TSW-SL with additional techniques (in the mentioned submission) achieve FIDs of 2.60 and 2.525.\n\nWe believe that our response regarding both perspectives addresses the reviewer's question effectively. We hope this paper serves as a catalyst for a new research direction in the field, and we are actively working on advancing these ideas further.\"}",
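The per-line computation that the local perspective above inherits from SW has a simple closed form: project the supports onto a direction, sort, and compare. A minimal NumPy sketch of vanilla SW for uniform, equal-size point clouds (names are illustrative; this is not the paper's tree-system code):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=50, p=2, seed=0):
    """Monte Carlo estimate of the Sliced Wasserstein distance between two
    uniform, equal-size point clouds. Each random direction plays the role
    of one 'line': the 1D optimal coupling is obtained by sorting."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_projections, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # directions on the unit sphere
    px = np.sort(X @ dirs.T, axis=0)  # (n, n_projections) sorted projected supports
    py = np.sort(Y @ dirs.T, axis=0)
    return float(np.mean(np.abs(px - py) ** p) ** (1 / p))
```

In a tree system, each line would additionally receive only the mass portion assigned to it by the splitting maps before this per-line step runs.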
"{\"comment\": \"We appreciate the reviewer\u2019s feedback and have provided the following responses to address the concerns raised about our paper. Below, we summarize the weaknesses and questions highlighted by the reviewer and provide our answers accordingly.\n\n---\n\n**Q1. What should I observe in Figure 5? The images appear identical to me. More expressive examples of the color transfer are necessary.**\n\n**Answer.** The primary objective of the color-transfer task is to navigate along the curve connecting the color distributions of the source and target images. In detail:\n\n- We employ our TSW-SL and MaxTSW-SL loss functions to minimize the distance between the color distributions of the generated and target images.\n\n- The 2-Wasserstein distance serves as the evaluation metric for the similarity between the generated and target images, with a smaller 2-Wasserstein distance indicating better performance.\n\nFigure 5 showcases both qualitative and quantitative results. Qualitatively, methods that yield images with colors more closely matching the target image are considered superior. The figure displays the generated images and the corresponding 2-Wasserstein distances at the final epoch.\n\nFor a more detailed qualitative comparison of our methods with SW, MaxSW, and other SW variants, we provide an additional example of the color-transfer task in Appendix E.2. This example further demonstrates that our TSW-SL and MaxTSW-SL loss functions, when used to measure the distance between two color distributions, achieve superior results both qualitatively and quantitatively.\n\n**Q2. Why did you consider your distance as the regularization for the GAN model?**\n\n**Answer.** In the experiment with the GAN model discussed in Section 6.3, we use TSW-SL as the loss function to compute the distance between the distributions of the target image and the generated image. 
Specifically, we employ it as the generator loss in the GAN model, not as a regularization term. This approach is closely based on the methodology of the Sliced Wasserstein generator [1], with details provided in the Appendix E.3.\\n\\n**Q3. In my understanding, gradient flow methods from section 6.1 in higher dimensions would provide better evidence. The paper demonstrates how the proposed distance can help decrease the distance to the target datasets. However, there is no evaluation showing that the computed distance is actually closer to the real Wasserstein distance provided.**\\n\\n**Answer.** It appears that the reviewer may have misunderstood the role of TSW-SL, MaxTSW-SL, and other SW-variants in our experiments. To clarify, these are utilized as loss functions.\\n\\nAs described in Section 6.1 of the main text (Lines 408\\u2013413), the objective of this task is to update the source distribution to make it as close to the target distribution as possible. The empirical advantage of our TSW-SL over other SW-variants is demonstrated by measuring the 2-Wasserstein distance between the source and target distributions at steps 500, 1000, 1500, 2000, and 2500. In this task:\\n\\n- TSW-SL, MaxTSW-SL, and other SW-variants are used as the loss function.\\n- The 2-Wasserstein distance serves as the evaluation metric.\\n\\nIt is important to emphasize that the roles of the 2-Wasserstein distance (evaluation metric) and the other distances, including TSW-SL, MaxTSW-SL, and SW-variants (loss functions), are distinct. Therefore, it is *not the objective of this task to identify a distance that more closely approximates the true Wasserstein distance*.\"}",
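The distinction drawn above between the loss (TSW-SL, MaxTSW-SL, or an SW-variant) and the evaluation metric (the 2-Wasserstein distance) can be made concrete. For uniform point clouds of equal size, the exact $W_2$ used for evaluation reduces to an optimal assignment; a minimal sketch using SciPy (illustrative code, not the experiment scripts):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_w2(X, Y):
    """Exact 2-Wasserstein distance between two uniform point clouds of
    equal size: optimal transport reduces to an optimal assignment."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared costs
    rows, cols = linear_sum_assignment(C)                    # Hungarian matching
    return float(np.sqrt(C[rows, cols].mean()))
```

In a gradient-flow run, one would update the source cloud by descending the sliced loss and periodically log `exact_w2(source, target)` at checkpoints such as steps 500, 1000, and so on, without the metric ever entering the optimization.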
"{\"title\": \"Reply to reviewer MtTT (Cont.)\", \"comment\": \"- **GAN Experiment Results and Their Significance:** We acknowledge your concern regarding the modest numerical gains in FID scores and the possible variability of these metrics. Nevertheless, we believe that the improvements\u2014**22.43% over DDGAN and 2.4% over SW**\u2014are significant, especially considering the already low baseline FID scores. Furthermore, when placing our results within the broader context of image generation benchmarks, such as those on CIFAR-10, our approach ranks among the **top 49 methods**. This highlights the effectiveness of our method in advancing diffusion models.\n\n- **Log scale of our newly added figure for FID plots over epochs:** As noted in Appendix E.4 of our manuscript, we utilize a logarithmic scale in FID plots due to the wide range of values (from over 400 in initial epochs to less than 3.0 in final epochs). We believe this scale offers a clearer visualization, and we will include the FID plot in the original scale in the final revision of our manuscript.\n\nThank you again for your thoughtful feedback, and we are happy to address any further concerns regarding our work.\n\nBest regards, \nThe Authors \n\n---\n\n### **References**\n\n[1] Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, and Gustavo Rohde. Generalized sliced wasserstein distances. Advances in neural information processing systems, 32, 2019.\n\n[2] Nadjahi, K., Durmus, A., Jacob, P. E., Badeau, R., & Simsekli, U. (2021). Fast approximation of the sliced-Wasserstein distance using concentration of random projections. Advances in Neural Information Processing Systems, 34, 12411-12424.\n\n[3] Khai Nguyen and Nhat Ho. Sliced wasserstein estimation with control variates. In The Twelfth International Conference on Learning Representations, 2024.\n\n[4] Khai Nguyen, Nhat Ho, Tung Pham, and Hung Bui. 
Distributional sliced-wasserstein and applications to generative modeling. In International Conference on Learning Representations, 2021.\\n\\n[5] Khai Nguyen, Nicola Bariletto, and Nhat Ho. Quasi-monte carlo for 3d sliced wasserstein. In The Twelfth International Conference on Learning Representations, 2024a.\\n\\n[6] Khai Nguyen, Shujian Zhang, Tam Le, and Nhat Ho. Sliced wasserstein with random-path projecting directions. In Forty-first International Conference on Machine Learning, 2024.\\n\\n[7] Izquierdo, S., & Civera, J. (2024). Optimal transport aggregation for visual place recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 17658-17668).\\n\\n[8] Wu, X., Dong, X., Nguyen, T. T., & Luu, A. T. (2023, July). Effective neural topic modeling with embedding clustering regularization. In International Conference on Machine Learning (pp. 37335-37357). PMLR.\\n\\n[9] He Zhao, Dinh Phung, Viet Huynh, Trung Le, and Wray Buntine. Neural topic model via optimal transport. arXiv preprint arXiv:2008.13537, 2020.\"}",
"{\"summary\": \"The authors proposed a computation of the Sliced Wasserstein (SW) distance using novel projections on systems of lines and achieved improvements compared to the original SW approach.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Clarity:\nThe paper is well-written and logically structured, making it easy to follow.\n\nOriginality:\nThe approach of efficiently solving sliced optimal transport using tree-based priors presents a novel and intriguing perspective.\n\nSignificance:\nThis research represents a crucial advancement in the development of more efficient closed-form tools for computing Wasserstein distances, which is an important area of study.\", \"weaknesses\": \"Quality:\nThe poor evaluation is the main weakness of the paper. The authors provided several experiments, but only Tables 1 and 2 offer an expressive justification of the proposed method.\", \"questions\": \"What should I observe in Figure 5? The images appear identical to me. More expressive examples of the color transfer are necessary.\nWhy did you consider your distance as the regularization for the GAN model? In my understanding, gradient flow methods from section 6.1 in higher dimensions would provide better evidence.\nThe paper demonstrates how the proposed distance can help decrease the distance to the target datasets. However, there is no evaluation showing that the computed distance is actually closer to the real Wasserstein distance provided. I think it is crucial to demonstrate this across a range of datasets.\nTable 4 does not demonstrate that the proposed method actually provides better results in FID.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"General Response (1/2)\", \"comment\": \"Dear AC and reviewers,\n\nThanks for your thoughtful reviews and valuable comments, which have helped us improve the paper significantly.\n\nWe sincerely thank the reviewers for their valuable feedback and constructive suggestions. We are encouraged by the positive endorsements regarding the following aspects of our work:\n\n1. The derivation is clear, well-structured, and provides a thoroughly developed framework. (Reviewers tDY2, MtTT, 7kaA, HzfV)\n\n2. The proposed metric is indeed a valid metric and can be approximated efficiently via a closed-form expression with the same computational complexity as Sliced-Wasserstein (with respect to samples). (Reviewers tDY2, 7kaA, HzfV)\n\n3. The proposed metric demonstrates consistently better performance compared to Sliced-Wasserstein in tasks such as Gradient Flows, Color Transfer, and Generative Models. (Reviewers tDY2, 7kaA, HzfV)\n\n---\n\nBelow, we address some common points raised in the reviews:\n\n**Q1. Key contribution of Tree-sliced framework and promising directions for future work.**\n\nThe tree-sliced framework in our paper is built on two key insights:\n\n- Local Perspective: Each line in a tree system is treated similarly to a line in the Sliced Wasserstein (SW) framework. Splitting maps determine how the mass at each point is distributed across the lines, and the projection of these mass portions onto the lines is then processed in the same way as in SW. A significant challenge with this approach lies in verifying whether the injectivity of the corresponding Radon Transform is preserved, as this determines whether the proposed metric is a true metric or merely a pseudo-metric. However, we addressed this concern by providing a proof in the paper.\n\n- Global Perspective: Tree structures and splitting maps establish connections between the lines, creating a cohesive system. 
This introduces a novel aspect compared to SW, enabling interaction and integration among the lines in a tree system. The Wasserstein distance can now be computed on this space with a closed-form expression, analogous to how lines are treated in SW.\n\nFrom these insights, several follow-up directions for TSW-SL can be pursued, focusing on each perspective:\n\n- Local Perspective: Consider, for example, the Generalized Sliced Wasserstein (GSW) distance [1]. In GSW, the SW framework is retained, but the projection mechanism is altered. Specifically, GSW generalizes the integration level set in the Radon Transform, replacing the level set defined by the inner product (representing orthogonal hyperplanes) with one defined by an arbitrary function. Similarly, in TSW-SL, which currently relies on the inner product, a framework could be developed that generalizes this by using an arbitrary function, offering new flexibility and applications.\n\n- Global Perspective: More detailed analysis can aid in designing improved tree structures and splitting maps. For instance, splitting maps could be enhanced to account for both positional information from points and lines, rather than relying solely on lines as in the current implementation. This idea is central to an extension of the TSW-SL paper, which is referenced by Reviewer 7kaA and has also been submitted to this venue. It can be accessed [here](https://openreview.net/forum?id=OiQttMHwce).\n\nIt is important to note that while these ideas might seem straightforward, their development is non-trivial. For example, a critical property that must be preserved in TSW-SL variants is the injectivity of the corresponding Radon Transform. For these reasons, we believe the paper should be evaluated by the development of tree systems and the corresponding Radon Transform aspects (*i.e., compare apple-to-apple*), rather than focusing on numerical comparisons with state-of-the-art methods.\"}",
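The GSW-style generalization described above (replacing the inner-product level sets with those of an arbitrary defining function) can be sketched by making the projection a swappable argument. This is a toy NumPy illustration with names of our choosing, not code from either paper, and, as stressed above, the injectivity of the corresponding Radon Transform must be verified separately for any such choice of defining function:

```python
import numpy as np

def generalized_sliced_w(X, Y, project, n_projections=50, p=2, seed=0):
    """Sliced distance where `project(points, theta)` maps supports to 1D.
    Passing the inner product recovers vanilla SW; other odd defining
    functions give GSW-style variants."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_projections, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    total = 0.0
    for theta in thetas:
        px = np.sort(project(X, theta))  # 1D sorted pushforward of X
        py = np.sort(project(Y, theta))
        total += np.mean(np.abs(px - py) ** p)
    return float((total / n_projections) ** (1 / p))

linear = lambda pts, theta: pts @ theta          # vanilla SW projection
odd_cubic = lambda pts, theta: (pts @ theta) ** 3  # an example odd defining function
```

The same swap could in principle be applied line-by-line inside a tree system, which is the generalization the local perspective above points to.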
"{\"metareview\": \"This paper introduces the Tree-Sliced Wasserstein distance on Systems of Lines (TSW-SL), a novel Optimal Transport (OT) variant that replaces one-dimensional projections in Sliced Wasserstein (SW) with tree systems to better preserve topological information. It provides a theoretical framework for tree systems, including their topological properties, splitting maps as projection mechanisms, and a Radon transform for injectivity. Experiments on tasks like gradient flows, image style transfer, and generative models show performance similar or favorable to SW and its variants.\n\nReviews for this paper are mixed. All the reviewers agree that the paper presents an original formulation of SW that is theoretically consistent. Nonetheless, after considering the different elements of discussion during the rebuttal, I concur with reviewers MtTT and 7kaA that the experiments are not fully convincing in showing the superiority of TSW-SL compared to other variants of SW, nor is it demonstrated in an ML setting where its usage would be beneficial. For this reason, I am inclined to recommend rejection. I encourage the authors to strengthen their paper by finding an ML setting where a clear advantage is shown for TSW-SL, not only with regard to variants of SW, but also with respect to other distribution divergences or distances.\", \"additional_comments_on_reviewer_discussion\": \"Discussions were quite lengthy, but some of the criticisms remained unaddressed by the authors.\"}",
"{\"summary\": \"The authors advance the concept of Sliced Wasserstein (SW) distance, which involves projecting data onto one-dimensional lines and leveraging closed-form expressions for one-dimensional optimal transport to reduce computational costs. The authors propose an alternative projection onto tree systems, asserting that this approach preserves topological information. Additionally, they rigorously demonstrate that their framework yields an efficient metric for comparing measures. They also present extensive experimental results showcasing the superior performance of their method compared to existing methods based on SW-type distances.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The derivation is clear and well-structured, providing a thoroughly developed framework. The authors demonstrate improved performance relative to previous related methods while maintaining lightweight computation. The code is provided.\", \"weaknesses\": \"Although the authors present a well-founded theoretical framework and demonstrate practical improvements on several setups, the proposed method essentially introduces (yet) another distance metric akin to the Sliced Wasserstein distance. The overall paper seems to be a rather straightforward and incremental extension of the prior related works in the field. That is, I am rather skeptical about the significance and impact of the construction proposed here on the field of ML.\n\nIt seems to me that the main purpose of the proposed TSW-SL is to compare probability distributions, a feature which is primarily needed for GAN training nowadays (the other experiments in the paper with gradient flows, etc. are toy, so I do not consider them as particularly demonstrative and serious). Here I have a general concern regarding the usefulness of the proposed TSW-SL in GANs. The experiments with GANs indeed show some practical improvement in the task of image generation. 
However, according to the famous work [1], tuning GANs is more about tuning various hyperparameters rather than changing the particular loss functions. This is also confirmed by the fact that most practically-oriented papers (from CVPR, ECCV, etc.) still rely on vanilla/NS/WGAN loss with additional regularizations rather than other more complex losses (like SW-based ones) proposed by the community later. This suggests that (in 2024) the contribution of the current paper (TSW-SL as a GAN loss) may be relatively minor, so I am more on the negative side about the paper.\n\nAdditionally, the experiments with GANs lack details and sufficient explanations. For example, it is not clear why the problem in lines (Appendix) 1501-1505 is a valid GAN optimization problem. In the DDGAN experiment, how exactly the DDGAN loss is used in TSW is also not explained in detail, while it should be: the DDGAN loss is non-trivial due to various conditioning (the discriminator is conditioned on the point and on the time moment). How this conditioning fits into the proposed TSW-SL framework remains unexplained (line 1618 is not enough).\n\n[1] Lucic, Mario, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. \"Are gans created equal? a large-scale study.\" Advances in neural information processing systems 31 (2018).\", \"questions\": [\"I would like to see more experimental results and discussion regarding the GAN training.\", \"Could you explain how your training time (time per iteration, per epoch, overall time to convergence) compares with that of DDGAN and the other baselines considered in the experiments.\", \"Could you please provide the convergence plots (FID as a function of epoch, e.g., once per several epochs) for your model vs. DDGAN vs. some other SW baseline? 
I would like to understand how stable the overall training of your model is compared with the baselines.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\", \"details_of_ethics_concerns\": \"None\"}",
"{\"comment\": \"**Q5. It is stated in the beginning of section 6 that \\\"the root is positioned near the mean of the target distribution\\\": what is the impact of this setting?**\\n\\n**Answer:** Through empirical observations, we discovered that initializing the tree root near the mean of the target distribution results in more consistent performance for our models.\\n\\nWe believe a thorough investigation into the analytical and statistical properties of TSW-SL is necessary to provide a theoretical explanation for these findings. However, since the paper primarily concentrates on the construction of tree systems and the associated Radon Transform, and given its already extensive scope, we have decided to defer these aspects of TSW-SL to future research.\\n\\n---\\n\\n**Comment 1. In the gradient flow experiment, differences in ground cost (e.g., \\\\(L_2\\\\), tree metric) between methods make it challenging to compare results at a fixed number of iterations.**\\n\\n**Answer.** As described in Section 6.1 of the main text (Lines 408\\u2013413), the objective of this task is to update the source distribution to make it as close to the target distribution as possible. The empirical advantage of our TSW-SL over other SW-variants is demonstrated by measuring the 2-Wasserstein distance between the source and target distributions at steps 500, 1000, 1500, 2000, and 2500. In this task:\\n\\n- TSW-SL, MaxTSW-SL, and other SW-variants are used as the loss function.\\n- The 2-Wasserstein distance serves as the evaluation metric.\\n\\nIt is crucial to note that the roles of the 2-Wasserstein distance (evaluation metric) and the other distances, such as TSW-SL, MaxTSW-SL, and SW-variants (loss functions), are separate. Hence, there is no difference when evaluating the distance between the updated source distribution and the target distribution.\\n\\n**Comment 2.** **Table 1:** **By *Wasserstein distance*, do you mean \\\\(W_2\\\\)? 
Additionally, could you clarify the timings reported in Table 2?**\n\n**Answer.** The Wasserstein distance mentioned in our paper is indeed the $2$-Wasserstein distance $W_2$.\nIn Table 2, the number reported is the average $2$-Wasserstein distance over 10 runs. We have also added the average time to calculate the distance at one iteration over 10 runs in Table 2 of our manuscript. The timings reported in Table 1 and Table 2 suggest that although the time per iteration for TSW-SL is slightly higher, the substantial reduction in Wasserstein distance underscores its efficiency across both datasets.\n\n*Table 2: Average Wasserstein distance between source and target distributions of 10 runs on high-dimensional datasets.*\n| Number of dimensions | Iteration 0 | Iteration 0 | Iteration 500 | Iteration 500 | Iteration 1000 | Iteration 1000 | Iteration 1500 | Iteration 1500 | Iteration 2000 | Iteration 2000 | Iteration 2500 | Iteration 2500 | Time/Iter(s) | Time/Iter(s) |\n|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| | SW | TSW-SL | SW | TSW-SL | SW | TSW-SL | SW | TSW-SL | SW | TSW-SL | SW | TSW-SL | SW | TSW-SL |\n| 10 | 6.41 | 6.41 | 4.32e-3 | **2.81e-3** | 2.94e-3 | **2.00e-3** | 2.81e-3 | **1.55e-3** | 2.23e-3 | **1.59e-3** | 2.28e-3 | **1.75e-3** | 0.010 | 0.015 |\n| 50 | 42.72 | 42.72 | 50.41 | **39.26** | 45.69 | **21.91** | 42.56 | **11.91** | 38.81 | **4.08** | 35.75 | **1.72** | 0.014 | 0.018 |\n| 75 | 69.06 | 69.06 | 92.39 | **79.71** | 90.79 | **67.99** | 90.07 | **53.92** | 86.58 | **44.91** | 90.31 | **31.61** | 0.015 | 0.018 |\n| 100 | 91.5 | 91.5 | 130.12 | **117.66** | 128.13 | **103.23** | 128.58 | **93.41** | 129.80 | **80.46** | 128.29 | **75.28** | 0.018 | 0.019 |\n| 150 | 142.54 | 142.54 | 214.09 | **203.30** | 213.71 | **190.62** | 215.05 | **186.77** | 212.90 | **183.52** | 216.32 | **182.63** | 0.020 | 0.022 |\n| 200 | 192.52 | 192.52 | 302.84 | **289.83** | 301.35 | **283.34** | 303.07 | **276.94** | 302.70 | **279.24** | 301.51 | **279.08** | 0.020 | 0.021 |\"}",
"{\"title\": \"Any Questions from Reviewer HzfV on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"comment\": \"We appreciate the reviewer\\u2019s feedback and have provided the following responses to address the concerns raised about our paper. Below, we summarize the weaknesses and questions highlighted by the reviewer and provide our answers accordingly.\\n\\n---\\n\\n**W2. There is also no discussion regarding the impact of the number of lines, which seems to be a critical aspect of the method\\u2014especially as using only one line recovers the standard SW. Additionally, the influence of the splitting maps is not addressed, despite being central to the new Radon transform.**\\n\\n**Answer.** Given that the complexity of TSW-SL, as presented in Section 5.1 of our main text, is $\\\\mathcal{O}(Lkn\\\\log n+Lkdn)$ ($L$ is the number of trees, $k$ is the number of lines per tree, $n$ is the number of supports for each distribution, and $d$ is the number of dimensions in the original space), while the computational complexity of SW is $\\\\mathcal{O}(Ln\\\\log n+Ldn)$ ($L$ is the total number of projection directions in SW), we conducted experiments such that the total number of projection directions in SW equals the product of the number of trees and lines in TSW-SL for a fair comparison.\\n\\nFor example, when we set the total number of projection directions in SW to $50$ in Section 6.3, we conducted experiments with TSW-SL using $3$ and $5$ lines per tree, selecting the number of lines accordingly to ensure that the total number of projection directions for both TSW-SL and SW was approximately the same.
Additional experimental results on the impact of the number of lines per tree are provided in Table 5 in Appendix E.3, where we explore different configurations with varying numbers of lines per tree in TSW-SL.\\n\\nThe results show that although the number of lines in each tree does have a different impact on performance, given the same number of total projection directions, our TSW-SL consistently yields better results compared to SW and Orthogonal SW.\\n\\n**Q1 + Q2. What are the main arguments in favor of TSW-SL wrt other many variants of SW? Could you highlight the unique advantages of TSW-SL?\\\"**\\n\\n**Can you give some stronger argument to the claim that tree systems allow avoiding a loss of topological information?**\\n\\n**Answer.** The motivation for this paper arose from a simple yet intriguing idea: In the framework of Sliced Wasserstein (SW), a probability distribution on $\\\\mathbb{R}^d$ is pushed forward onto a line. This raises the question: what does the resulting distribution reveal about the original one? It is evident that distinct distributions, when projected onto the same line, can become indistinguishable.\\n\\nNow, let us compare a tree system composed of two lines, $a$ and $b$ in $\\\\mathbb{R}^2$, along with a distribution $\\\\mu$ on $\\\\mathbb{R}^2$. For simplicity, assume that $\\\\mu$ is a Dirac delta distribution.\\n\\n- Many Dirac delta distributions in $\\\\mathbb{R}^2$ become identical after being projected onto line $a$, and the same holds true for line $b$. However, in most cases, the projections of $\\\\mu$ on both lines of the tree system can uniquely identify the original Dirac delta distribution. (\\\"Most cases\\\" excludes exceptional scenarios, such as when $a$ and $b$ are the same line.)\\n- The above reasoning might appear insufficient because it evaluates the tree system by considering only one of its lines at a time. What happens if we evaluate the tree system using both lines together? 
This is where the splitting map becomes essential. The splitting map enables a more versatile allocation of mass between the two lines, rather than concentrating all the mass onto one line.\\n\\nIn summary, with the same number of lines\\u2014and thus the same computational cost\\u2014tree systems in TSW-SL provide a significantly deeper understanding of probability distributions compared to individual lines in SW.\\n\\nA natural question arises: if a better representation space is desired, why not replace one-dimensional lines with higher-dimensional subspaces of $\\\\mathbb{R}^d$? The answer lies in computational feasibility. Optimal Transport in $\\\\mathbb{R}^d$ for $d>1$ is computationally prohibitive due to the lack of a closed-form solution. In contrast, both SW and TSW-SL offer efficient closed-form expressions, making them more practical.\\n\\nWe believe this explanation adequately addresses the reviewer's concerns.\"}",
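The Dirac-identifiability argument in the rebuttal above can be checked numerically: two projections onto non-parallel lines determine a point in $\mathbb{R}^2$ by solving a 2×2 linear system. A small illustrative sketch (the directions `a`, `b` and the point `x` are arbitrary choices, not taken from the paper):

```python
import numpy as np

# Two fixed non-parallel unit directions in R^2 (arbitrary choices).
a = np.array([1.0, 0.0])
b = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])

x = np.array([2.0, -1.0])          # the "unknown" Dirac location
proj = np.array([a @ x, b @ x])    # its scalar projections onto a and b

# Two projections onto non-parallel lines pin x down uniquely:
# solve the 2x2 system with rows a and b for x.
recovered = np.linalg.solve(np.stack([a, b]), proj)
```

A single projection (one line) discards a whole direction of information, which is exactly the degeneracy the tree system avoids.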
"{\"summary\": \"This paper introduces a new tree Sliced-Wasserstein distance. This new distance is obtained by projecting on trees instead of 1d lines. The authors provide a complete theoretical analysis of the tree structures introduced, of the Radon transform used to project on them, as well as a discussion on how to sample such trees. Finally, they compare their method with the original Sliced-Wasserstein distance and some of their variants on several tasks such as gradient flows, color transfer, generative modeling with SWGAN and for diffusion models.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This paper provides a new tree Sliced-Wasserstein distance, which in contrast with previous works, project on trees, and is thus more flexible.\\n\\nThe authors provide first a full theoretical analysis of the tree space introduced, as well as a way to sample them in practice. Defining the right Radon transform, this analysis allows defining naturally a Tree Sliced-Wasserstein distance, which generalizes the original Sliced-Wasserstein distance. They show that this is well a metric, and that it can be approximated efficiently in the same complexity as Sliced-Wasserstein (w.r.t samples). This construction is thus original, well motivated and with good properties.\\n\\nFinally, they compare their distance with Sliced-Wasserstein and variants thereof, on different experiments. On these different tasks, they show consistently better performances than Sliced-Wasserstein. Moreover, their experiments are done on classical baselines such as gradient flows on 2D datasets and higher dimensional gaussians, or color transfer, but also on high dimensional data such as images with generative modeling.\", \"weaknesses\": \"Some minor weaknesses are that there are no statistical analysis for the Tree Sliced-Wasserstein distance proposed, e.g. no sample complexity, nor topological analysis. 
One can wonder how it relates to other distances: is TSW-SL a lower bound of the Sliced-Wasserstein or of the Wasserstein distance?\\n\\nThe theoretical section on trees is very interesting, but some parts are challenging to read (for instance Section 3.2). I note however that an effort has been made on being clear, notably through Figure 1 and 2, which help to understand the constructions.\", \"questions\": \"When citing references about work which project on lower dimensional subspaces, some references are missing such as [1,2,3].\\n\\nI think in Equation (2), there is an abuse of notation as the Radon transform of a density is a density, and not a measure.\\n\\nThe construction given in Algorithm 1 only samples a chain tree. Did you also test with sampling trees where nodes have different children? \\n\\nFor the experiments on SW generators (Section 6.3), the number of projections (50 and 500) seems very low compared to the original paper, where I think they use something like 100K projections.\", \"typos\": \"- Line 102: \\\"(Helgason & Helgason, 2011)\\\" -> (Helgason, 2011)\\n- Line 322: \\\"If tree systems in $\\\\mathbb{T}$ consists only one line\\\"\\n\\n\\n[1] Lin, T., Zheng, Z., Chen, E., Cuturi, M., & Jordan, M. I. (2021, March). On projection robust optimal transport: Sample complexity and model misspecification. In International Conference on Artificial Intelligence and Statistics (pp. 262-270). PMLR.\\n\\n[2] Huang, M., Ma, S., & Lai, L. (2021, July). A riemannian block coordinate descent method for computing the projection robust wasserstein distance. In International Conference on Machine Learning (pp. 4446-4455). PMLR.\\n\\n[3] Muzellec, B., & Cuturi, M. (2019). Subspace detours: Building transport plans that are optimal on subspace projections. Advances in Neural Information Processing Systems, 32.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces a variant of the Sliced-Wasserstein distance that, rather than projecting distributions onto lines, projects them onto tree systems. This approach leverages the closed-form solution of optimal transport (OT) for trees to develop a fast algorithm based on the Radon transform applied to tree structures. It uses splitting maps to define how distributions are projected onto these structures. This tree-sliced Wasserstein distance on a System of Lines (TSW-SL) is shown to be a metric. Experiments compare the original SW with TSW-SL, demonstrating that TSW-SL provides slightly better performance than SW, with comparable computation times.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Paper introduces a new variant of sliced-Wasserstein: relying on a tree metric is original and allows benefiting from closed-form solution that exists, allowing preserving computational efficiency. The paper is clear, well illustrated, experimental section validates the approach in different contexts.\", \"weaknesses\": [\"**Note:** I have reviewed an extension of TSW-SL for the same venue.\", \"As noted in the conclusion, the paper *\\\"introduces a straightforward alternative to SW by replacing one-dimensional lines with tree systems,\\\"* aiming to provide a more geometrically meaningful space. This objective aligns with several SW variants (as mentioned in the related work section of the introduction) that have achieved performance improvements in learning tasks involving SW and/or offer alternative schemes with distinct properties (e.g., statistical characteristics, behavior in high-dimensional spaces, the ability to provide a transport plan). 
However, given that TSW-SL achieves similar (or slightly better) performance to SW while retaining the same limitations (e.g., difficulties in sampling meaningful lines in high dimensions, as noted in Table 2), it remains unclear why and under what circumstances TSW-SL should be preferred over SW or its many variants.\", \"There is also no discussion regarding the impact of the number of lines, which seems to be a critical aspect of the method\\u2014especially as using only one line recovers the standard SW. Additionally, the influence of the splitting maps is not addressed, despite being central to the new Radon transform.\", \"**Other Comments:**\", \"In the gradient flow experiment, differences in ground cost (e.g., $L_2$, tree metric) between methods make it challenging to compare results at a fixed number of iterations.\", \"Table 1: By *Wasserstein distance,* do you mean $W_2$? Additionally, could you clarify the timings reported in Table 2?\", \"Line 467: The loss should not be denoted as $\\\\mathcal{L}$ since this letter represents a line.\", \"Section 6.2: It is difficult to visually confirm that *TSW-SL produces images that most closely resemble the target.*\", \"Appendix p.17, line 870: $\\\\overline{L}$ should be $\\\\overline{\\\\mathcal{L}}$.\", \"Example A.14 is unclear: $n_i$ should have length 6 (i.e., $i = 0$ to $5$). What do the arrows signify? Additionally, an arrow seems to be missing on line 1035. The progression through step $i$ is confusing.\", \"Page 25: Please provide more detail about the transition from eq. (46) to eq. (47).\"], \"questions\": [\"What are the main arguments in favor of TSW-SL w.r.t. the many other variants of SW? Could you highlight the unique advantages of TSW-SL?\", \"Can you give some stronger argument to the claim that tree systems allow avoiding a loss of topological information? Is this claim related to the number of lines that should be sampled? What is the impact of this parameter on the performances?\", \"Is it possible to derive additional properties of TSW-SL? For instance, does it metrize weak convergence? What are its statistical properties?\", \"It is stated in the conclusion that improved performances are expected by adapting recent advanced sampling schemes or techniques of SW to TSW-SL. It seems that the extension is not that straightforward: can you comment on that, and how improved performances can be expected?\", \"It is stated in the beginning of section 6 that \\\"*the root is positioned near the mean of the target distribution*\\\": what is the impact of this setting?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official comments by Authors\", \"comment\": \"**References.**\\n\\n[1] Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, and Gustavo Rohde. Generalized sliced wasserstein distances. Advances in neural information processing systems, 32, 2019.\\n\\n[2] Khai Nguyen, Shujian Zhang, Tam Le, and Nhat Ho. Sliced wasserstein with random-path projecting directions. In Forty-first International Conference on Machine Learning, 2024.\"}",
"{\"title\": \"Any Questions about Our Rebuttal?\", \"comment\": \"Dear Reviewers,\\n\\nWe would like to thank you very much for your feedback, and we hope that our response addresses your previous concerns. In case you have not responded to our rebuttal so far, please feel free to let us know if you have any further comments on our work as the discussion phase is expected to conclude in the next few days. We would be more than happy to address any additional concerns from you.\\n\\nThank you again for spending time on the paper. We really appreciate that!\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"comment\": \"**References.**\\n\\n[1] Khai Nguyen, Nicola Bariletto, and Nhat Ho. Quasi-monte carlo for 3d sliced wasserstein. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n[2] Khai Nguyen, Nhat Ho, Tung Pham, and Hung Bui. Distributional sliced-wasserstein and applications to generative modeling. In International Conference on Learning Representations, 2021.\\n\\n[3] Kimia Nadjahi, Alain Durmus, Pierre E Jacob, Roland Badeau, and Umut Simsekli. Fast approximation of the sliced-wasserstein distance using concentration of random projections. Advances in Neural Information Processing Systems, 34:12411\\u201312424, 2021.\\n\\n[4] Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, and Alexander G Schwing. Max-sliced wasserstein distance and its use for gans. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10648\\u201310656, 2019.\\n\\n[5] Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, and Gustavo Rohde. Generalized sliced wasserstein distances. Advances in neural information processing systems, 32, 2019.\\n\\n[6] Cl\\u00e9ment Bonet, Laetitia Chapel, Lucas Drumetz, and Nicolas Courty. Hyperbolic sliced-wasserstein via geodesic and horospherical projections. In Topological, Algebraic and Geometric Learning Workshops 2023, pages 334\\u2013370. PMLR, 2023\\n\\n[7] David Alvarez-Melis, Tommi Jaakkola, and Stefanie Jegelka. Structured optimal transport. In International conference on artificial intelligence and statistics, pages 1771\\u20131780. PMLR, 2018.\\n\\n[8] Franc\\u00b8ois-Pierre Paty and Marco Cuturi. Subspace robust wasserstein distances. In International conference on machine learning, pages 5072\\u20135081. PMLR, 2019.\\n\\n[9] Jonathan Niles-Weed and Philippe Rigollet. Estimation of wasserstein distances in the spiked transport model. 
Bernoulli, 28(4):2663\\u20132688, 2022.\\n\\n[10] Khai Nguyen, Shujian Zhang, Tam Le, and Nhat Ho. Sliced wasserstein with random-path projecting directions. In Forty-first International Conference on Machine Learning, 2024.\\n\\n[11] K. Nguyen and N. Ho. Sliced wasserstein estimation with control variates. In The Twelfth International Conference on Learning Representations, 2024.\"}",
"{\"comment\": \"**Q2. Can you give some stronger argument to the claim that tree systems allow avoiding a loss of topological information? Is this claim related to the number of lines that should be sampled? What is the impact of this parameter on the performances?**\\n\\n**Answer.** Given that the complexity of TSW-SL, as presented in Section 5.1 of our main text, is $\\\\mathcal{O}(Lkn\\\\log n+Lkdn)$ ($L$ is the number of trees, $k$ is the number of lines per tree, $n$ is the number of supports for each distribution, and $d$ is the number of dimensions in the original space), while the computational complexity of SW is $\\\\mathcal{O}(Ln\\\\log n+Ldn)$ ($L$ is the total number of projection directions in SW), we conducted experiments such that the total number of projection directions in SW equals the product of the number of trees and lines in TSW-SL for a fair comparison. \\nFor example, when we set the total number of projection directions in SW to 50 in Section 6.3, we conducted experiments with TSW-SL using $3$ and $5$ lines per tree, selecting the number of lines accordingly to ensure that the total number of projection directions for both TSW-SL and SW was approximately the same. Additional experimental results on the impact of the number of lines per tree are provided in Table 5 in Appendix E.3, where we explore different configurations with varying numbers of lines per tree in TSW-SL.\\nThe results show that although the number of lines in each tree does have a different impact on performance, given the same number of total projection directions, our TSW-SL consistently yields better results compared to SW and Orthogonal SW.\\n\\n**Q3. Is it possible to derive additional properties of TSW-SL? For instance, does it metrize weak convergence?
What are its statistical properties?**\\n\\n**Answer:** Given the paper's emphasis on the construction of tree systems and the corresponding Radon Transform, and considering the already extensive content, we have chosen to defer the analytical and statistical examination of TSW-SL to future work.\\n\\nIt is important to highlight that analyzing these aspects of TSW-SL presents unique challenges due to the inclusion of splitting maps. This feature is exclusive to Tree-Sliced Wasserstein variants and sets them apart from Sliced Wasserstein variants. We are actively investigating the properties of splitting maps, which appear to be a highly promising research direction.\"}",
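The closed-form optimal transport on trees that the Tree-Sliced framework relies on can be illustrated in the simplest case of a chain tree: the order-1 tree-Wasserstein distance is the sum over edges of the edge length times the absolute difference of the masses lying in the subtree below that edge. A generic sketch of this classical tree-OT closed form (an illustration only, not the paper's TSW-SL implementation, which additionally involves splitting maps):

```python
import numpy as np

def chain_tree_wasserstein(mu, nu, edge_len):
    """Order-1 tree-Wasserstein distance on a chain tree rooted at node 0.

    mu, nu  : probability masses on the m nodes,
    edge_len: lengths of the m-1 edges (edge i joins nodes i and i+1).
    The distance sums, over edges, the edge length times the absolute
    difference of the masses in the subtree hanging below that edge.
    """
    diff = np.asarray(mu, float) - np.asarray(nu, float)
    # mass difference below edge i = total over nodes i+1, ..., m-1
    below = np.cumsum(diff[::-1])[::-1][1:]
    return float(np.sum(np.asarray(edge_len, float) * np.abs(below)))
```

Because the distance is computed from cumulative sums, each sampled tree is cheap to evaluate, consistent with the per-tree costs quoted in the complexity comparison above.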
"{\"comment\": \"**Q1. Could you explain how your training time (time per iteration, per epoch, overall time to convergence) compares with the one from DDGAN and the other baselines considered in the experiments.**\\n\\n**Answer.** Regarding the training time of our methods in DDGAN and other baselines in Table 4, we have already provided the training time per epoch of our methods compared with other baselines. We have further provided the training time per iteration as a new column in Table 4 of the revision of our paper. In terms of overall time to converge, it is impossible for us to provide in this discussion phase due to lack of time and resources since **it requires reproducing the full training of all baselines mentioned in [1]**. We will include the overall time to convergence of the remaining baseline in the final revision of our manuscript.\\n\\n**Q2. Could you please provide the convergence plots (FID as a function of epoch, e.g., once per several epochs) for your model vs. DDGAN vs. some other SW baseline? I would like to understand how stable is the overall training of your model compared with the baselines.**\\n\\n**Answer.** We have added the FID plots of our TSW-SL-DD compared to SW-DD in Figure 9 of Appendix E.4 of the revision of our paper. The results show that our TSW-SL-DD achieves a greater reduction in FID scores compared to SW-DD during the final 300 epochs. Due to the lack of time and resources during the discussion periods, we cannot provide FID score over epochs of other SW variants reported in the papers (since we follow the results reported in [10] to compare with our TSW-SL-DD results). Additionally, **reproducing all baselines for DDGAN experiments would require time and resources far beyond what is feasible during the discussion period**. We will include the FID over epochs of the remaining baselines in the final revision of our manuscript.\\n\\n---\\n\\nWe sincerely thank the reviewer for the valuable feedback. 
If our responses adequately address all the concerns raised, we kindly hope the reviewer will consider raising the score of our paper.\"}",
"{\"comment\": \"We appreciate the reviewer\\u2019s feedback and have provided the following responses to address the concerns raised about our paper. Below, we summarize the weaknesses and questions highlighted by the reviewer and provide our answers accordingly.\\n\\n---\\n\\n**W1. Some minor weaknesses are that there are no statistical analysis for the Tree Sliced-Wasserstein distance proposed, e.g. no sample complexity, nor topological analysis. One can wonder how it relates with other distances, it TSW-SL a lower-bound of the Sliced-Wasserstein or of the Wasserstein distance?**\\n\\n**Answer.** Given the paper's emphasis on the construction of tree systems and the corresponding Radon Transform, and considering the already extensive content, we have chosen to defer the analytical and statistical examination of TSW-SL to future work.\\n\\nIt is important to highlight that analyzing these aspects of TSW-SL presents unique challenges due to the inclusion of splitting maps. This feature is exclusive to Tree-Sliced Wasserstein variants and sets them apart from Sliced Wasserstein variants. We are actively investigating the properties of splitting maps, which appear to be a highly promising research direction.\\n\\n**Q1. When citing references about work which project on lower dimensional subspaces, some references are missing such as [1,2,3].**\\n\\n**Answer.** We appreciate the reviewer for highlighting those citations. We have incorporated them into our revised paper (Line 57).\\n\\n**Q2. I think in Equation (2), there is an abuse of notation as the Radon transform of a density is a density, and not a measure.**\\n\\n**Answer.** We acknowledge the reviewer's observation regarding the misuse of notation. We have carefully reviewed the paper and addressed similar errors in the revision.\\n\\n**Q3. The construction given in Algorithm 1 only samples a chain tree. 
Did you also test with sampling trees where nodes have different childs?**\\n\\n**Answer.** We evaluated the performance of models using alternative tree structures and found their performance to be comparable. Since the chain tree structure offers a straightforward approach for sampling trees and calculating Eq. (13), we opted to focus exclusively on the chain tree structure in the main body of the paper.\\n\\nConsidering the paper's emphasis on the construction of tree systems and the associated Radon Transform, and given its already comprehensive content, we have chosen to reserve further analysis on tree sampling and other aspects of TSW-SL for future work.\\n\\n**Q4. For the experiments on SW generators (Section 6.3), the number of projections (50 and 500) seems very low compared to the original paper, where I think they use something like 100K projections.**\\n\\n**Answer.** In the GAN experiment described in Section 6.3, we evaluated the performance of our TSW-SL method in comparison with SW using a total of 50 and 500 projection directions. In the existing literature, [1] also introduced SW with control variates for GANs and tested it using 10 and 1000 projection directions. Interestingly, we observed that increasing the number of projection directions from 500 to 1000 does not improve the FID score. For instance, [1] reported an FID score of 10.05 on CelebA when using SW with 1000 projection directions, which is higher than the FID score of 9.62 achieved with SW using 500 projection directions, as detailed in Section 6.3 of our paper. Therefore, we choose to conduct experiments to compare TSW-SL and SW with 50 and 500 projecting directions.\\n\\n---\\n\\n**Reference.**\\n\\n[1] Khai Nguyen and Nhat Ho. Sliced wasserstein estimation with control variates. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n---\\n\\nOnce again, we sincerely thank the reviewer for their feedback. 
Please let us know if there are any additional concerns or questions from the reviewer regarding our paper.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"**References.**\\n\\n[1] Deshpande, I., Zhang, Z., & Schwing, A. G. (2018). Generative modeling using the sliced wasserstein distance. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3483-3491).\\n\\n[2] Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, and Gustavo Rohde. Generalized sliced wasserstein distances. Advances in neural information processing systems, 32, 2019.\\n\\n[3] Khai Nguyen, Shujian Zhang, Tam Le, and Nhat Ho. Sliced wasserstein with random-path projecting directions. In Forty-first International Conference on Machine Learning, 2024.\"}",
"{\"title\": \"Thanks for your endorsement!\", \"comment\": \"Thanks for your response, and we appreciate your endorsement.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Any Questions from Reviewer tDY2 on Our Rebuttal?\", \"comment\": \"We would like to thank the reviewer again for your thoughtful reviews and valuable feedback.\\n\\nWe would appreciate it if you could let us know if our responses have addressed your concerns and whether you still have any other questions about our rebuttal.\\n\\nWe would be happy to do any follow-up discussion or address any additional comments.\"}",
"{\"title\": \"Summary of Revisions\", \"comment\": \"Incorporating comments and suggestions from reviewers, as well as some further empirical studies we believe informative, we summarize here the main changes in the revised paper:\\n\\n- We added an additional example of the color-transfer task in **Appendix E.2** to clearly demonstrate the qualitative advantages of our TSW-SL loss over other SW-variant losses. \\n- We added a new column to **Table 4** (Denoising Diffusion Model task) to report the training time per iteration for each method. \\n- We included the time per iteration of gradient flow task for high-dimensional dataset in **Table 2**. \\n- We included the FID plot over epochs for DDGAN using TSW-SL and SW to showcase the empirical advantages of our method in terms of convergence rate. \\n- We included the explanation of our method (TSW-SL) into Denoising Diffusion Model in **Appendix E.4, line 1632.**\\n- We corrected typos in **lines 467** and **870**, clarified **Example A.14** (as noted by Reviewer *7kaA*), corrected the typo in **line 322** (as suggested by Reviewer *HzfV*), and added references [1], [2], and [3] in **line 57** based on the recommendations of Reviewer *HzfV*. \\n\\n---\\n\\n**Reference.**\\n\\n[1] Lin, T., Zheng, Z., Chen, E., Cuturi, M., & Jordan, M. I. (2021, March). On projection robust optimal transport: Sample complexity and model misspecification. In International Conference on Artificial Intelligence and Statistics (pp. 262-270). PMLR.\\n\\n[2] Huang, M., Ma, S., & Lai, L. (2021, July). A riemannian block coordinate descent method for computing the projection robust wasserstein distance. In International Conference on Machine Learning (pp. 4446-4455). PMLR.\\n\\n[3] Muzellec, B., & Cuturi, M. (2019). Subspace detours: Building transport plans that are optimal on subspace projections. Advances in Neural Information Processing Systems, 32.\"}",
"{\"title\": \"Discussion Phase Summary\", \"comment\": \"Dear Area Chair and Reviewers,\\n\\nAs the discussion phase comes to a close, we would like to summarize the key contributions of our paper.\\n\\nIn this work, we introduce the Tree-Sliced Wasserstein (TSW) distance, an extension of the well-known Sliced Wasserstein (SW) distance. Instead of relying on traditional one-dimensional lines, the TSW distance utilizes a more intricate integration domain, referred to as *tree systems*. In essence, tree systems are structures in which one-dimensional lines are interconnected, behaving locally as lines.\\n\\nThis innovative framework paves the way for the introduction of the *Tree-Sliced Wasserstein on Systems of Lines* (TSW-SL) distance, facilitating the application of the Tree-Sliced distance to **dynamic-support problems** (e.g., GANs and Diffusion models). Addressing such applications has been a significant open challenge since the concept of Tree-Slicing was introduced in NeurIPS 2019 ([1]). To our knowledge, no successful attempts have been made to tackle this problem until now. Our work addresses this gap and opens new avenues for applying TSW to these complex tasks.\\n\\nOur experimental results demonstrate the feasibility of the proposed method. While this is a foundational study, we show that TSW-SL outperforms traditional SW under equivalent computational costs. It is important to note that our primary goal is to establish a theoretical framework for TSW rather than competing with state-of-the-art SW-based methods. Furthermore, since tree systems are locally composed of lines, we anticipate that techniques developed for SW can be adapted to tree systems, potentially offering further improvements. This potential is discussed in more detail in our General Response.\\n\\nFinally, the concepts introduced in this work serve as the foundation for extensions of the TSW-SL framework, which have also been submitted to this venue.
Two examples of these extensions can be found [here](https://openreview.net/forum?id=OiQttMHwce) (referenced by Reviewer 7kaA) and also [here](https://openreview.net/forum?id=FPQzXME9NK). In these submissions, the proposed methods demonstrate either superior performance or results comparable to state-of-the-art SW approaches in their respective tasks.\\n\\nWe hope this summary helps the Area Chair and Reviewers form an accurate evaluation of the theoretical contributions presented in our paper.\\n\\nBest regards,\\n\\nAuthors\\n\\n---\\n\\n**Reference.**\\n\\n[1] Tam Le, Makoto Yamada, Kenji Fukumizu, and Marco Cuturi. Tree-sliced variants of Wasserstein distances. NeurIPS, 2019.\"}",
"{\"title\": \"Official Response to Authors\", \"comment\": \"Thank you for addressing my concerns. I'm raising my score.\"}",
"{\"comment\": \"We sincerely appreciate the reviewer\\u2019s thoughtful feedback and have provided the following responses to address the concerns raised about our paper. Below, we begin with a general reply and then proceed to address each of the reviewer\\u2019s questions individually.\\n\\n---\\n\\nIt appears to us that the reviewer may not be fully familiar with the context of Optimal Transport, particularly the specific topic of our paper, which focuses on the sliced variants of the Wasserstein distance in Optimal Transport.\\n\\n> It seems to me that the main purpose of the proposed TSW-SL seems to be to compare the probability distributions, a feature which is primary needed for GAN training nowadays\\n\\nThe Wasserstein distance is known to have a supercubic computational complexity concerning the number of supports in the input measures. Specifically, for probability measures with at most $n$ supports, its computational complexity is $\\\\mathcal{O}(n^3 \\\\log n)$ [1]. This high computational cost has driven the development of sliced variants in Optimal Transport, which provide more computationally efficient alternatives to the original Wasserstein distance.\\n\\nIn the case of the original Sliced Wasserstein distance, the computational complexity is reduced to $\\\\mathcal{O}(L n \\\\log n + Ldn)$, where $d$ is the dimension of the supports and $L$ is the number of samples used in the Monte Carlo approximation. This represents a significant improvement in computational efficiency, reducing the complexity from $n^3 \\\\log n$ to $n \\\\log n$.\\n\\nIn our TSW-SL distance, the computational complexity is $\\\\mathcal{O}(Lkn \\\\log n + Lkdn)$, where $L$ is the number of samples in the Monte Carlo approximation, $k$ represents the number of tree levels, $n$ is the number of supports, and $d$ is the dimensionality of the supports. 
This complexity retains the same order with respect to the number of supports $n$, making it computationally efficient while incorporating additional structural information through the tree system.\\n\\n>This is also confirmed by the fact that most more practically-related papers (from CVPR, ECCV, etc.) still rely on vanilla/NS/WGAN loss with additional regularizations rather than other more complex losses (like SW-based) proposed by the community later. This suggests that (in 2024) the contribution of the current paper (TSW-SL as a GAN loss) may be relatively minor, so I am more on the negative side about the paper.\\n\\nIt is important to clarify that **the primary contribution of our paper is not about introducing TSW-SL as a GAN loss** as the reviewer stated. Instead, the paper focuses on introducing a novel integration domain, referred to as tree systems, as a replacement for the traditional lines used in the Sliced Wasserstein framework. The GAN experiments in our work primarily serve to highlight the advantages of this new integration domain when compared to lines.\\n\\nAs stated in our paper (Line 377): \\\"It is worth noting that the paper presents a simple alternative by substituting lines in SW with tree systems, focusing mainly on comparing TSW-SL with the original SW, without expecting TSW-SL to outperform more recent SW variants.\\\"\\n\\nImproving existing components in the SW framework ([1], [2], [3], [4], [5], [6], etc.) and extending one-dimensional lines to more complex domains such as one-dimensional manifolds or low-dimensional subspaces ([6], [7], [8], [9], etc.) are active and evolving areas of research in the Machine Learning community. Hence, we believe it is not entirely fair to evaluate a theoretical contribution like ours solely on one specific task. Instead, the focus should be on the generality and potential of the introduced approach.\"}"
]
} |
EKJhH5D5wA | SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration | [
"Heming Xia",
"Yongqi Li",
"Jun Zhang",
"Cunxiao Du",
"Wenjie Li"
] | Speculative decoding (SD) has emerged as a widely used paradigm to accelerate LLM inference without compromising quality. It works by first employing a compact model to draft multiple tokens efficiently and then using the target LLM to verify them in parallel. While this technique has achieved notable speedups, most existing approaches necessitate either additional parameters or extensive training to construct effective draft models, thereby restricting their applicability across different LLMs and tasks. To address this limitation, we explore a novel plug-and-play SD solution with layer-skipping, which skips intermediate layers of the target LLM as the compact draft model. Our analysis reveals that LLMs exhibit great potential for self-acceleration through layer sparsity and the task-specific nature of this sparsity. Building on these insights, we introduce SWIFT, an on-the-fly self-speculative decoding algorithm that adaptively selects intermediate layers of LLMs to skip during inference. SWIFT does not require auxiliary models or additional training, making it a plug-and-play solution for accelerating LLM inference across diverse input data streams. Our extensive experiments across a wide range of models and downstream tasks demonstrate that SWIFT can achieve over a $1.3\times$$\sim$$1.6\times$ speedup while preserving the original distribution of the generated text. We release our code in https://github.com/hemingkx/SWIFT. | [
"Speculative Decoding",
"LLM Inference Acceleration",
"Efficient NLP"
] | Accept (Poster) | https://openreview.net/pdf?id=EKJhH5D5wA | https://openreview.net/forum?id=EKJhH5D5wA | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zPQcFWuQws",
"whevOjo4JY",
"vllIHy7Ohr",
"vaHtgvrEtH",
"v36O73hrUc",
"sUMb6qKHcl",
"sQJqNQjQM2",
"qpgNc1xm38",
"puO5TlXlmL",
"oIfLzTiljX",
"nb9yptwgIN",
"maWvamIccs",
"kn7YXzjX0u",
"jj64Y849is",
"ixjaLyFVmw",
"i9pGgXT2X2",
"hFeUSykyzj",
"eLpaB6CdlZ",
"csz5F49alS",
"brm9PmWUJa",
"b9I43l3MnS",
"aYAaxuOdvY",
"YbnJbM9H80",
"VPtIcloQRo",
"UH7hPunI0P",
"TTgYvz2xJU",
"NeoqNjb8sz",
"MiqKbrVx6s",
"MG6jckDaWZ",
"LhGs9Cph5P",
"Jy5VobWxdu",
"JAJfQtqg6W",
"HRGg5SoHLb",
"FuKpPyHDd1",
"Dx3W4mYhdF",
"Cqzmea0gVY",
"CFnOuTTZvV",
"6QJ41tlA6k",
"4r5dD7oiZY",
"3oe8O1oDtW",
"2mtNuWbdii"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment"
],
"note_created": [
1732526269027,
1731832478841,
1731832696417,
1730640971557,
1731832409363,
1731859193803,
1732526720466,
1734761681556,
1731811696594,
1732210546813,
1732677885588,
1732366002856,
1729484259511,
1732424102380,
1732808585831,
1731811553959,
1732694387649,
1731832758325,
1732609706188,
1732365818638,
1731050959308,
1732526632648,
1732547614412,
1730619117179,
1731832594858,
1732365583150,
1731859160742,
1731859109206,
1732163790270,
1732501903561,
1731811833420,
1731858990493,
1732163819747,
1731811783917,
1732365876091,
1731859058369,
1732452946722,
1732843563465,
1737523770521,
1731858909402,
1731811310779
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_tWD9"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Area_Chair_8857"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_rnoa"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_tWD9"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_vnfL"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_LGVh"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_tWD9"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_vnfL"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_rnoa"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_LGVh"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_LGVh"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Reviewer_LGVh"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6451/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer LGVh\", \"comment\": \"Thank you for your prompt feedback. Below, we provide additional experimental results and discussions to address your concerns.\\n\\n**1.Comparisons on MT-Bench in random order**\\n\\nWe conducted additional experiments comparing SWIFT and Lookahead using a random order of MT-Bench, as shown below:\\n\\n> R3-3-Table1: Experimental Results on Vicuna-v1.3 (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Vicuna-7B | Vicuna-13B | Vicuna-33B |\\n| --------- | :-------: | :--------: | :--------: |\\n| Lookahead | 1.13x | 1.14x | 1.13x |\\n| SWIFT | **1.20x** | **1.27x** | **1.35x** |\\n\\nIn this experimental setting, SWIFT continues to demonstrate **superior efficiency** over Lookahead across all model sizes. Moreover, as the model size increases, SWIFT\\u2019s overall speedup **improves consistently**, aligning with the trends observed in Figure 8 of our main paper.\\n\\nWe acknowledge your concern that \\\"in real-world applications, with a massive number of user instructions, the diversity of instances is much higher.\\\" However, as we addressed in our prior response, most user instructions can be categorized into similar types based on their intent, such as reasoning, writing, coding, QA, etc. (e.g., the 8 subtasks in MT-Bench). Therefore, a potential application of SWIFT in these scenarios could involve **caching optimized layer configurations for similar data types** and retrieving them when processing corresponding instances. This approach would likely **further enhance** SWIFT\\u2019s efficiency beyond the results shown here.\\n\\nThat is to say, the speedup results in `R3-3-Table1` represent a **lower bound** for SWIFT\\u2019s efficiency, as they rely solely on instance-specific optimization. Even under these restricted conditions, SWIFT outperforms the prior plug-and-play method, Lookahead. 
We sincerely hope that our response could provide you a better understanding of SWIFT's value and potential for efficient LLM inference.\\n\\n**2.Additional comparisons of LLaMA-3 on MT-Bench**\\n\\nWe also conducted evaluations on MT-Bench using LLaMA-3, as presented below:\\n\\n> R3-3-Table2: Experimental results on LLaMA-3-8B-Instruct (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Writing | Roleplay | Reasoning | Math | Coding | Extraction | Stem | Humanities | Overall |\\n| --------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: | :-------: | :--------: | :-------: |\\n| Lookahead | 1.05x | 1.14x | 1.10x | 1.23x | 1.15x | 1.16x | 1.11x | 1.15x | 1.14x |\\n| SWIFT | **1.24x** | **1.26x** | **1.24x** | **1.21x** | **1.26x** | **1.30x** | **1.22x** | **1.19x** | **1.24x** |\\n\\n> R3-3-Table3: Experimental results on LLaMA-3-70B-Instruct (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Writing | Roleplay | Reasoning | Math | Coding | Extraction | Stem | Humanities | Overall |\\n| --------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: | :-------: | :--------: | :-------: |\\n| Lookahead | 1.06x | 1.15x | 1.11x | 1.22x | 1.19x | 1.14x | 1.10x | 1.14x | 1.14x |\\n| SWIFT | **1.31x** | **1.43x** | **1.36x** | **1.34x** | **1.44x** | **1.52x** | **1.33x** | **1.37x** | **1.39x** |\\n\\nThese results confirm SWIFT\\u2019s superiority over Lookahead when LLaMA-3 serves as the backbone, consistent with our findings in `R3-2-Table1` and `R3-2-Table2`. \\n\\n------\\n\\nWe hope the above demonstrations and additional experiments comprehensively address your concerns. We deeply appreciate your inquiry about SWIFT's effectiveness in real-world LLM chat applications. We will incorporate these discussions into the revised manuscript and remain open to further feedback. Please feel free to reach out with any additional concerns.\\n\\nThank you for your thoughtful review.\"}",
"{\"title\": \"Response - 2\", \"comment\": \"***Q2: The authors should also present results for methods such as Medusa and Eagle, which require minimal training overhead. How does the overhead of the proposed layer searching algorithm compare to the overhead of training additional modules like Eagle?***\", \"a2\": \"**In fact, training-required methods such as Medusa [1] and Eagle [2] still incur substantial training costs.** We did not explicitly discuss these costs in our manuscript because the training-cost gap between training-required SD methods and plug-and-play SD methods is already well recognized in the field.\\n\\nTo further highlight SWIFT\\u2019s efficiency, we provide a detailed breakdown of the training and optimization costs for these methods (refer to *R2-Table2* for additional details):\\n\\n> R3-Table1: Comparison of training and optimization costs for Llama-2-13B\\n\\n| Methods | Eagle | LayerSkip | Self-SD | SWIFT |\\n| ------------------------ | :---------------------------- | :--------------------------------------- | :----------------------------------------------------- | :------------------------------ |\\n| **Training Cost** | 1-2 days with 8 RTX 3090 GPUs | 50k training steps with 64 A100s (80 GB) | 1000 Bayesian Optimization Iterations Before inference | **N/A** |\\n| **Optimization Latency** | - | - | ~7.2 hours | **~2 minutes (200x reduction)** |\", \"detailed_comparisons\": \"- **Compared to training-required methods:** We compare the training and optimization costs of SWIFT with two representative training-required methods -- Eagle [2] and LayerSkip [3], which necessitate a time-intensive fine-tuning process on a large amount of data. 
In contrast, SWIFT is a *plug-and-play* SD solution that is applicable to most LLMs **without requiring additional training** and offers **immediate usability** for accelerating LLM inference.\\n- **Compared to Self-SD:** Self-SD [4] involves an extensive Bayesian Optimization process before inference, which introduces significant latency (e.g., ~7.5 hours for LLaMA-2-13B, ~20 hours for LLaMA-2-70B). SWIFT introduces an **on-the-fly optimization** strategy, reducing optimization latency by approximately **200x** while maintaining 1.3x\\u20131.6x speedups over vanilla autoregressive decoding.\\n\\nThese comparisons underscore SWIFT\\u2019s superior efficiency in terms of both training and optimization costs.\\n\\n------\\n\\n**The Necessity of Plug-and-Play SD Methods:**\\n\\nAdditionally, we further discuss the necessity of plug-and-play methods for Speculative Decoding (SD):\\n\\nWhile training-required methods (e.g., Medusa [1], Eagle [2]) effectively push the boundaries of SD efficiency by incorporating lightweight draft modules and aligning them with target LLMs, they still demand substantial computational resources (e.g., GPU time, datasets) to deliver meaningful acceleration.\\n\\nFor example, Eagle [2], the current SOTA SD method, provides fine-tuned checkpoints for only 11 models across 5 LLM series in its public [repository](https://github.com/SafeAILab/EAGLE). Users have to train new checkpoints on their own if:\\n\\n- Their target LLM is not among the released checkpoints.\\n- The LLM base is updated (e.g., LLaMA-3.x series).\\n\\nIn contrast, **plug-and-play SD methods, such as SWIFT, are model-agnostic and training-free**, offering immediate acceleration without computational overhead. This is particularly valuable for large-scale models (70B\\u2013340B), where retraining/fine-tuning is often infeasible. 
The widespread adoption of plug-and-play SD methods like Lookahead [5] and PLD [6] (supported in vLLM) highlights the demand for ready-to-use solutions, especially in settings like local LLM inference and online API services.\\n\\n------\\n\\nWe hope these comparisons and insights help clarify SWIFT\\u2019s contributions and practical value as an innovative plug-and-play SD method. We will incorporate these results and discussions into our revised manuscript. Thank you again for your feedback.\\n\\n\\n\\n[1] Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. Cai et.al. ICML 2024.\\n\\n[2] EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty. Li et.al. ICML 2024.\\n\\n[3] Layer Skip: Enabling Early Exit Inference and Self-Speculative Decoding. Elhoushi et.al. ACL 2024.\\n\\n[4] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et.al. ACL 2024.\\n\\n[5] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. Fu et.al. ICML 2024.\\n\\n[6] Prompt Lookup Decoding. Apoorv Saxena. 2023. [Github Repository](https://github.com/apoorvumang/prompt-lookup-decoding/).\"}",
"{\"title\": \"Response - 4\", \"comment\": \"***Q4: The method requires different settings for different tasks. However, in real-world LLM chat applications, it is often difficult to predict the corresponding tasks of user instructions. It is suggested that the authors evaluate the method's speedup performance on benchmarks like MT-Bench, which test the general capabilities of models.***\", \"a4\": \"We appreciate your inquiry regarding SWIFT's performance across diverse data types. Actually, as we demonstrate in Figure 2 (Section 3.2.1), ***SWIFT can achieve an average 1.2x speedup even without any task-specific optimization*** by using a unified layer skipping pattern. Building on this foundation, SWIFT is designed to dynamically optimize its acceleration performance by adjusting to the characteristics of the current data stream. As discussed in Section 5.2 (Line 462) and further elaborated in *A6 to R2*, ***SWIFT\\u2019s efficiency improves as input length and the number of instances increase***.\\n\\nThis dynamic optimization mechanism makes SWIFT particularly effective in scenarios with large volumes of homogeneous data from specific tasks (e.g., specific test set) \\u2014common in both research and industrial applications. Furthermore, SWIFT accommodates application scenarios where user prompts exhibit *inertia*\\u2014that is, users often ask similar types of questions consecutively. \\n\\nBesides, in real-world LLM applications, user prompts can often be clustered into similar categories. For instance, MT-Bench organizes its data into 8 task types, representing diverse user needs. In such scenarios, a promising enhancement for SWIFT could involve **caching optimal settings** for each task type and dynamically retrieving the corresponding layer configuration to accelerate inference for incoming data. 
While this remains a promising direction for future exploration, it underscores SWIFT's potential to effectively handle the challenges posed by real-world applications.\\n\\nAdditionally, as shown in **Figure 7**, SWIFT demonstrates **robustness to domain shifts and varying data types**, which contrasts with prior methods like Self-SD [1] that are sensitive to such variations and struggle to handle different data types. This adaptability further highlights SWIFT\\u2019s superiority over existing layer-skipping SD approaches.\\n\\nWe sincerely appreciate your feedback and believe this explanation demonstrates SWIFT\\u2019s strengths and versatility, including its ability to adapt to dynamic input data streams and real-world LLM applications.\\n\\n\\n\\n[1] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et.al. ACL 2024.\"}",
"{\"summary\": \"This paper aims to improve speculative decoding (SD) with a focus on eliminating the need for additional model parameters or extensive training to enable effective drafting in SD. In particular, the paper utilizes the same (target) model as the draft model by skipping a subset of model layers while generating draft tokens. Towards this, the paper proposes an SD method, namely SWIFT, that performs on-the-fly adaptive layer selection via an optimization phase to identify task-specific layers to skip. The optimization phase is followed by an inference acceleration phase that leverages the identified layers to perform skipping during drafting. During the inference acceleration phase, SWIFT additionally relies on 1) early stopping of the drafting process if the (draft) model's confidence is not high enough; and 2) utilizing top-k predictions for each draft token position during parallel verification. The authors empirically validate the utility of SWIFT by showcasing 1.3-1.6x speed-up on CNN/DM, GSM8K, and TinyStories datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The paper successfully demonstrates that the speculative decoding (SD) framework has the potential to speed up LLM inference even when one does not employ additional model parameters and task-specific training to support the drafting phase.\\n\\n2) The paper makes two key observations about layer skipping during the drafting phase that highlight the need for adaptive (task-specific) selection of layers to skip during the drafting phase to maximize the benefit of the layer skipping-based drafting approach. 
Subsequently, the paper proposes SWIFT - an effective SD approach that can identify a reasonable set of layers to skip for the underlying task with minimal training.\\n\\n3) The paper further showcases the utility of leveraging the (draft) model's prediction confidence and top-k per-token predictions to improve the realized speed-up via SWIFT.\\n\\n4) The paper is mostly well-written and conveys the key ideas in sufficient detail. The proposed ideas exhibit sufficient novelty over existing SD methods. The empirical results and in-depth empirical analysis highlight the gains realized by SWIFT over vanilla LLM inference.\", \"weaknesses\": \"1) There is room for improvement in the discussion of related prior work. Given that Elhoushi et al. 2024 also leverage layer skipping during the drafting phase, a detailed discussion of this work is warranted. Furthermore, the authors may also want to cite https://openreview.net/pdf?id=yUmJ483OB0.\\n\\n2) The authors may want to make their empirical evaluation more comprehensive. Currently, the authors don't compare with existing approaches that rely on layer skipping during the drafting phase. Even though these existing methods might rely on extensive training, the authors should compare SWIFT with these methods. Such a comparison can highlight if there is any performance gap between these methods and their proposed plug-and-play approach.\\n\\n3) The paper aims to eliminate the extensive training of existing layer skipping-based approaches via an efficient on-the-fly optimization phase. However, it's not clear if the existing methods can also perform well even when one limits the amount of offline training for these methods.\\n\\n4) The authors repeatedly emphasize that their proposed method is a plug-and-play method. However, they don't seem to be evaluating their method in a dynamic setting where the underlying task (distribution) changes over time. 
In such a dynamic setting, would SWIFT have to interleave the optimization and acceleration phases? Would one still observe a good amount of speed up in such settings?\", \"questions\": \"Please see the weaknesses section above. In addition, please consider the following questions:\\n\\n1) Looking at the ablation studies in Appendix D (Table 7), it appears that *dynamic verification* does not bring much value, as the loss in overall speed-up is minimal when one excludes dynamic verification (1.560x to 1.541x). Could the authors comment on this?\\n\\n2) Do the speedup numbers in Table 2 take into account the optimization phase? If yes, how many LLM generations are performed to obtain the results in Table 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response - 1\", \"comment\": \"We are grateful for the time Reviewer LGVh has spent reviewing our submission. We appreciate your recognition of the strengths of our proposed SWIFT, particularly its ability to accelerate LLM inference *without introducing additional model parameters or modules for drafting*, thereby ensuring *broad applicability* across various LLMs.\\n\\nThat said, we would like to **address some important misunderstandings** that may have affected the evaluation of our work. Specifically, there appears to be confusion regarding the performance comparison with Lookahead (as detailed in Table 2 of our main results) and the training overhead associated with training-required methods versus plug-and-play approaches. These points are crucial for accurately assessing the significance of our contributions. \\n\\nBelow, we provide detailed clarifications for each of your comments.\\n\\n\\n\\n***Q1: The speedup is not as promising compared to other training-free methods like Lookahead.***\", \"a1\": \"**This appears to be a significant misunderstanding, as we have already provided a detailed comparison with Lookahead [1] in Table 2 (main results) of our manuscript.** As shown in Table 2, SWIFT consistently achieves superior efficiency compared to prior training-free methods, including Lookahead and Parallel Decoding [2]. 
Specifically:\\n\\n- SWIFT achieves speedups of **1.3x\\u20131.6x** over vanilla autoregressive decoding across various models and tasks.\\n- It delivers **10%\\u201320%** higher efficiency compared to Lookahead Decoding.\\n\\nAdditionally, Appendix D.4 (Tables 9 and 10) presents **detailed token acceptance comparisons**, further underscoring SWIFT's advantages over Lookahead.\\n\\nBeyond performance metrics, as discussed in Section 1 (L49\\u2013L60, Figure 1) and Section 2 (L126\\u2013133), SWIFT introduces **sparsity-based drafting**, a novel and **complementary research direction** for plug-and-play speculative decoding (SD). These directions, summarized and visualized in Figure 1, are as follows:\\n\\n- **Jacobi-based drafting (prior methods):** This approach appends multiple pseudo tokens to the input prompt, allowing the LLM to generate several tokens as drafts in a single step.\\n- **Sparsity-based drafting (ours):** SWIFT leverages the inherent layer sparsity within LLMs to enable efficient drafting by adaptively optimizing the set of skipped layers during inference.\\n\\nThese two approaches are **orthogonal and complementary**, and combining them could amplify the efficiency of both. For instance, SWIFT could incorporate a Lookahead-like mechanism during drafting, which is expected to further enhance both drafting efficiency and token acceptance rates.\\n\\nTo the best of our knowledge, SWIFT is the **first approach to explore plug-and-play SD using sparsity-based drafting**. We hope our findings provide valuable insights and inspire further research in this area.\\n\\n\\n\\n[1] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. Fu et.al. ICML 2024.\\n\\n[2] Accelerating Transformer Inference for Translation via Parallel Decoding. Santilli et.al. ACL 2023.\"}",
"{\"title\": \"Response - 5\", \"comment\": [\"**To summarize and for further discussion:**\", \"In the responses above, we have provided the **additional experimental results** you suggested (Q1 & Q2), which further illustrate the effectiveness of SWIFT. If you have any **additional comments** regarding the robustness of our approach\\u2014such as suggestions for new experiments or alternative interpretations of results\\u2014**please feel free to share them**. We would greatly appreciate the opportunity to engage further and address any remaining concerns.\", \"We also acknowledge your concerns regarding **the value of plug-and-play SD research** and **the innovations introduced by SWIFT** (Q3 & Q4). **We have provided detailed responses to address these points.** Recognizing the value of a research direction is indeed ***a serious matter***, and we deeply respect your perspective. If you still have reservations about the value of pursuing plug-and-play SD research, we encourage you to share them with us. Your feedback would be invaluable in helping us rethink and refine our future research trajectory.\", \"After addressing your suggested experiments and elaborating on the significance of plug-and-play SD methods, **would you reconsider your current rating?** If you decide not to adjust your rating, we would be grateful if you could clarify whether this decision stems from concerns about unresolved experimental issues or a lack of confidence in the research direction itself.\", \"We look forward to continuing the discussion with you. Thank you once again for the time and effort you have dedicated to reviewing our submission. Your insights are greatly appreciated.\"]}",
"{\"title\": \"Kindly Reminder on the Discussion Period\", \"comment\": \"Dear Reviewer, I hope this message finds you well. As the discussion period is nearing its end with **only two days remaining**, I wanted to ensure we have addressed all your concerns satisfactorily. If there are any additional points or feedback you'd like us to consider, please let us know. Your insights are invaluable to us, and we\\u2019re eager to address any remaining issues to improve our work.\\n\\nThank you for your time and effort in reviewing our paper.\"}",
"{\"metareview\": \"(a) Summary of Scientific Claims and Findings:\\n\\nThis paper introduces SWIFT, a plug-and-play self-speculative decoding (SD) method that accelerates large language model (LLM) inference by dynamically skipping intermediate layers during drafting. Layer-skipping is used as the compact draft model, avoiding additional training or parameters. A Bayesian optimization algorithm adaptively selects task-specific layers to skip during inference. SWIFT achieves speedups on various benchmarks without compromising text quality, outperforming prior SD methods.\\n\\n(b) Strengths of the Paper:\\n\\nSWIFT does not require auxiliary models or additional training, enabling immediate deployment across various LLMs.\\n\\nThe authors introduce layer sparsity-based SD, which complements existing SD approaches and offers a practical alternative to training-heavy methods.\\n\\nSWIFT is validated across multiple datasets and LLMs, with comparisons to existing methods like Self-SD and Lookahead.\\n\\n(c) Weaknesses of the Paper and Missing Elements:\\n\\nLimited evaluation against newer training-free methods, e.g., Lookahead, and methods with minimal training overhead, such as Medusa and Eagle.\\n\\nNeeds more extensive testing in highly dynamic scenarios where task distribution changes rapidly.\\n\\nSome reviewers noted limited speedup for well-optimized LLMs (e.g., Llama-3 series) compared to less efficient models.\\n\\n(d) Decision and Rationale:\\n\\nThe paper has significant contributions, particularly its novel plug-and-play SD method. However, concerns about limited comparisons still remain. Positive reviewer feedback following the discussion phase suggests potential for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The authors effectively addressed most of the reviewers\\u2019 concerns, leading to an increase in the reviewers\\u2019 scores.\"}",
"{\"title\": \"Response - 3\", \"comment\": \"***Q3: The paper aims to eliminate the extensive training of existing layer skipping-based approaches via an efficient on-the-fly optimization phase. However, it's not clear if the existing methods can also perform well even when one limits the amount of offline training for these methods.***\", \"a3\": \"Thank you for raising this point. Below, we provide a detailed comparison of optimization performance, focusing on Self-SD [1], an established layer-skipping SD approach, with varying amounts of optimization iterations.\\n\\n> R2-Table3: Experimental Results of Self-SD on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| #Bayesian_Opt | Optimization Time (s) | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| :-----------: | :-------------------: | :--: | :--: | :------: | :-----: |\\n| 0 | 0 | 0.50 | 1.75 | 0.56 | 0.96x |\\n| 10 | 279 | 0.49 | 1.83 | 0.57 | 0.97x |\\n| 50 | 1474 | 0.49 | 1.80 | 0.61 | 1.02x |\\n| 100 | 2898 | 0.45 | 3.04 | 0.80 | 1.19x |\\n| 200 | 5517 | 0.48 | 3.47 | 0.84 | 1.24x |\\n\\nAs shown, Self-SD achieves minimal speedup improvement with fewer than 50 Bayesian optimization iterations (nearly equivalent to unified skipping, i.e., *#Bayesian Opt = 0*). 
At 100 iterations, Self-SD reaches a 1.19x speedup; however, its optimization latency is nearly **25 times** that of SWIFT (~1 hour).\\n\\nTo compare SWIFT and Self-SD under similar optimization latencies, we conducted the following experiment:\\n\\n> R2-Table4: Experimental Results on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Methods | #Random_Opt | #Bayesian_Opt | Opt_Time (s) | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| ------------- | :---------: | :-----------: | :----------: | :--: | :--: | :------: | :-----: |\\n| Self-SD | - | 5 | 155 | 0.50 | 1.80 | 0.57 | 0.97x |\\n| Self-SD w/ CA | - | 5 | 155 | 0.50 | 2.07 | 0.86 | 1.17x |\\n| SWIFT | 552 | 23 | **116** | 0.45 | 5.82 | 0.98 | **1.56x** |\\n\\nThese results demonstrate SWIFT\\u2019s superiority over Self-SD in both optimization efficiency and speedup. Below, we analyze the reasons for this advantage (discussed in L168\\u2013L174 of our manuscript):\\n\\n- **Optimization Objective Granularity:** Self-SD calculates its optimization objective at a multi-sample level, requiring sequential decoding of all selected training samples (e.g., 8 samples with 32 tokens each) for every iteration to optimize Equation (1). In contrast, SWIFT adopts a **step-level optimization objective**, optimizing the layer set dynamically at each decoding step.\\n- **Bayesian Optimization Complexity:** The computational complexity of Bayesian optimization increases significantly with the number of iterations. SWIFT mitigates this burden by combining **random search** with **interval Bayesian optimization**, accelerating convergence of the optimization process while reducing computational overhead.\\n\\nTo further explore optimization trade-offs, we reduce Self-SD\\u2019s sequential optimization demand to 1 sample with 8 tokens, allowing for more Bayesian optimization iterations under similar latency. 
The results are summarized below:\\n\\n> R2-Table5: Experimental Results on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Methods | #Random_Opt | #Bayesian_Opt | Opt_Time (s) | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| ------------- | :---------: | :-----------: | :----------: | :--: | :--: | :------: | :-----: |\\n| Self-SD | - | 30 | 199 | 0.45 | 2.08 | 0.70 | 1.04x |\\n| Self-SD w/ CA | - | 30 | 199 | 0.45 | 2.44 | 0.93 | 1.22x |\\n| SWIFT | 552 | 23 | **116** | 0.45 | 5.82 | 0.98 | **1.56x** |\\n\\nEven with optimized settings, SWIFT achieves significantly better speedup and efficiency compared to Self-SD, demonstrating the superiority of our proposed strategies.\\n\\nWe appreciate your insightful question and will incorporate these results and discussions into the revised manuscript.\\n\\n\\n\\n[1] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et al. ACL 2024.\"}"
"{\"comment\": \"Thank you for your detailed and thoughtful responses. I appreciate the effort to address each of my comments thoroughly and to integrate discussions of related work and clarifications on SWIFT's design. The added context and analysis significantly enhance the manuscript and align it well with the ICLR bar. I look forward to seeing the revised version incorporating the suggested points. I am confident in the value and contributions of this work and will revise my score to 8.\"}",
"{\"title\": \"Thank you for the detailed response\", \"comment\": \"I thank the authors for their comprehensive response. Most of my questions and concerns are resolved. Adding new results (A2, A3) will further strengthen the submission. I have a couple of remaining questions:\\n\\n1. In A3, why did the authors not perform a comparison with *LayerSkip with a limited training budget*? \\n2. Do the authors expect to continue to observe good speedup in a dynamic setting when the change in data distribution is faster than what is considered in Figure 7? In real systems serving mixed traffic, is it common to have a large number of requests (~500) from a single task appear together?\"}",
"{\"title\": \"Follow-Up: Seeking Further Feedback\", \"comment\": \"Dear Reviewer, I hope you're doing well. Following up on our recent exchange regarding this paper, I wanted to check if there are any further concerns or feedback from your side. Your insights are invaluable to us, and we're keen to address any remaining issues.\"}",
"{\"summary\": \"This paper aims to accelerate the inference of LLMs. They introduce SWIFT, a self-speculative decoding algorithm that adaptively selects intermediate layers to skip without extra cost. They performed an empirical analysis of the layer-skipping SD paradigm and show the potential for self-acceleration of LLMs through layer sparsity. They used some techniques like early-stop drafting to further speed up inference.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and flows very smoothly.\\n2. The authors make an effort to demonstrate the feasibility of their theory through experiments.\\n3. The method incorporates many of the latest techniques.\", \"weaknesses\": \"1. The authors should compare their method with Self-SD [1] in Table 2, since their method is an improvement of the latter.\\n2. The authors only compared to the baseline on the Llama and CodeLlama models. I believe experiments should be conducted on larger models with different architectures to demonstrate the generalization of the method.\\n3. Moreover, compared with Self-SD, the innovation is still insufficient; for example, the confidence-aware inference strategies are similar to some mechanisms in [1], [2].\\n4. Although SWIFT does not require additional training, compared with other methods, like EAGLE [2] and Medusa [3], which can achieve over a 3.05-4.26x speedup, SWIFT doesn\\u2019t show much value. As reported in [2], the draft model is trainable within 1-2 days for 70B models.\\n\\n[1] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. ACL 2024\\n[2] EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees, EMNLP 2024\\n[3] Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads, ICML 2024\", \"questions\": \"1. The authors should compare their method with Self-SD [1] in Table 2, since their method is an improvement of the latter.\\n2. The authors only compared to the baseline on the Llama and CodeLlama models. I believe experiments should be conducted on larger models with different architectures to demonstrate the generalization of the method.\\n3. Moreover, compared with Self-SD, the innovation is still insufficient; for example, the confidence-aware inference strategies are similar to some mechanisms in [1], [2].\\n4. Although SWIFT does not require additional training, compared with other methods, like EAGLE [2] and Medusa [3], which can achieve over a 3.05-4.26x speedup, SWIFT doesn\\u2019t show much value. As reported in [2], the draft model is trainable within 1-2 days for 70B models.\\n\\n[1] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. ACL 2024\\n[2] EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees, EMNLP 2024\\n[3] Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads, ICML 2024\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
"{\"comment\": \"Some of my concerns have been addressed, but I will keep my score for the following reasons:\\n\\n1. The authors claim that the proposed SWIFT can only achieve a 1.2x speedup without task-specific optimization, which is a common setting for real-world LLM chat applications. On average, Lookahead also achieves a 1.2x speedup without any task-specific optimization. This indicates that, under the common setting, SWIFT cannot outperform Lookahead.\\n\\n2. I am still wondering about SWIFT's performance on benchmarks, such as MT-Bench, that evaluate general capabilities.\"}"
"{\"title\": \"Raised score\", \"comment\": \"Thank you for providing additional experiments in response to my questions. I found the response adequate and have increased my score to 6.\"}",
"{\"title\": \"Response - 2\", \"comment\": \"***Q2: Currently, the authors don't compare with existing approaches that rely on layer skipping during the drafting phase. Even though these existing methods might rely on extensive training, the authors should compare SWIFT with these methods. Such a comparison can highlight if there is any performance gap between these methods and their proposed plug-and-play approach.***\\n\\n**A2:** Thanks for this advice! We provide a comparison of SWIFT with LayerSkip [1] and Self-SD [2] below, which are the two most representative layer-skipping SD methods. We report the skip ratio ($r$), mean accepted tokens (*M*), and token acceptance rate ($\\\\alpha$) for comparison. The relationship among these three metrics and the expected wall-clock speedup is illustrated in Equation (6) of Appendix B.3.\\n\\n> R2-Table1: Experimental Results on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Plug-and-Play | Original Dist | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| --------------- | :-----------: | :-----------: | :--: | :--: | :------: | :-------: |\\n| LayerSkip | **No** | **No** | 0.80 | 2.42 | 0.64 | 1.64x |\\n| Self-SD | No | Yes | 0.43 | 4.02 | 0.85 | 1.29x |\\n| Self-SD *w/ CA* | No | Yes | 0.43 | 5.69 | 0.98 | 1.52x |\\n| SWIFT | Yes | Yes | 0.45 | 5.82 | 0.98 | **1.56x** |\\n\\n> *CA* refers to the Confidence-aware inference Acceleration strategy in Section 4.2. 'Original Dist' indicates whether the original distribution of the target LLM is preserved. \\n>\\n> Note: We re-implemented LayerSkip using the Hugging Face version, which does not support KV cache reuse. 
Integrating KV cache reuse would likely improve LayerSkip's speedup to approximately 1.8x, as reported in its original paper.\\n\\nWe compare SWIFT with each layer-skipping SD method below:\\n\\n- **Comparison with LayerSkip:** LayerSkip\\u2019s pretraining/finetuning process enables a more aggressive skip ratio ($r=0.8$), resulting in mean accepted tokens of $M=2.42$ and a token acceptance rate of $\\\\alpha=0.64$. However, as noted in **R1**, this process modifies the ***original distribution*** of the target LLM, potentially reducing the reliability of its outputs. In contrast, SWIFT preserves the original distribution of the target LLM, while achieving a promising 1.56x speedup. \\n- **Comparison with Self-SD:** Self-SD necessitates a time-intensive Bayesian Optimization process before inference (~7.5 hours for LLaMA-2-13B and ~20 hours for LLaMA-2-70B). In contrast, SWIFT introduces an on-the-fly optimization strategy, resulting in an approximate **200X** reduction in optimization latency while maintaining a 1.56x speedup. We further augmented Self-SD with our *Confidence-aware inference Acceleration strategy* (Self-SD *w/ CA*). Even compared to this augmented version, SWIFT achieves competitive speedups.\\n\\nTo further illustrate SWIFT\\u2019s efficiency, we present a breakdown of the training and optimization costs for these methods:\\n\\n> R2-Table2: Comparison of training and optimization costs for Llama-2-13B\\n\\n| Methods | LayerSkip | Self-SD | SWIFT |\\n| -------------------- | :-------------------------------------- | :----------------------------------------------------- | :------------------------------------------ |\\n| Training Cost | 50k training steps with 64 A100 (80 GB) | 1000 Bayesian optimization iterations before inference | N/A |\\n| Optimization Latency | - | ~7.2 hours | ~2 minutes (**200X** reduction$\\\\downarrow$) |\\n\\nThese comparisons clearly highlight SWIFT\\u2019s efficiency in both performance and training/optimization costs. 
We will incorporate these results and discussions into our revised manuscript. We sincerely appreciate your suggestion, which has helped us strengthen the comparative analysis and better illustrate SWIFT\\u2019s advantages.\\n\\n\\n\\n[1] Layer Skip: Enabling Early Exit Inference and Self-Speculative Decoding. Elhoushi et al. ACL 2024.\\n\\n[2] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et al. ACL 2024.\"}"
"{\"title\": \"Response to Reviewer tWD9\", \"comment\": \"Thanks for your response. We are glad to hear that most of your questions and concerns have been addressed. We appreciate your acknowledgement that the additional results (A2, A3) further strengthen our submission. Below, we provide further clarifications to address your follow-up concerns.\\n\\n***Q7: In A3, why did the authors not perform a comparison with LayerSkip with a limited training budget?***\\n\\n**A7:** Thank you for your inquiry. As we mentioned in Q2, training LayerSkip on LLaMA-2-13B requires **50k training steps with 64 A100s (80 GB)**, which involves significant computational resources. Given this large demand, we did not anticipate LayerSkip achieving an effective speedup with an optimization latency on par with SWIFT's (within 2 minutes). We deeply value the contributions of LayerSkip and the insights it provides to layer-skipping SD research as a training-required method. We note that our proposed SWIFT complements their efforts by investigating plug-and-play SD with layer skipping.\\n\\n\\n\\n***Q8: Do the authors expect to continue to observe good speedup in a dynamic setting when the change in data distribution is faster than what is considered in Figure 7? In real systems serving mixed traffic, is it common to have a large number of requests (~500) from a single task appear together?***\\n\\n**A8:** Thanks for your further inquiry into SWIFT's effectiveness in real systems serving mixed traffic. This is a very good point! To address this, we conducted additional experiments using MT-Bench [1], a widely adopted multi-turn benchmark with 8 subtasks (10 instances each). To simulate mixed traffic, we randomized the instance order in MT-Bench, which reflects a more dynamic real-world setting. 
The results are shown below:\\n\\n> R2-2-Table1: Experimental Results on Vicuna-v1.3 (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Vicuna-7B | Vicuna-13B | Vicuna-33B |\\n| :-------- | :-------- | :--------- | :--------- |\\n| Lookahead | 1.13x | 1.14x | 1.13x |\\n| SWIFT | **1.20x** | **1.27x** | **1.35x** |\\n\\nIn this experimental setting, SWIFT continues to demonstrate **superior efficiency** over Lookahead across all model sizes. Notably, as the model size increases, SWIFT\\u2019s overall speedup **improves consistently**, aligning with the trends observed in Figure 8 of our main paper.\\n\\nWe also note that in LLM serving scenarios, most user instructions can be categorized into similar types based on their intent, such as reasoning, writing, coding, QA, etc. (e.g., the 8 subtasks in MT-Bench). Therefore, a potential application of SWIFT in these scenarios could involve **caching optimized layer configurations for similar data types** and retrieving them when processing corresponding instances. This approach would likely **further enhance** SWIFT\\u2019s efficiency beyond the results shown here.\\n\\nFor further context on this point, we refer to a detailed discussion in our last response to Reviewer LGVh (R3).\\n\\n[1] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Zheng et al. NeurIPS 2023 Datasets and Benchmarks.\\n\\n------\\n\\nWe hope the additional experiments and clarifications comprehensively address your concerns. We deeply appreciate your thoughtful inquiry into SWIFT's effectiveness in real-world mixed traffic scenarios. These discussions will be incorporated into the final manuscript, and we remain open to any further feedback. Please feel free to reach out with any additional questions.\\n\\nOnce again, we sincerely thank you for your thoughtful suggestions and detailed feedback, which helps us further strengthen our manuscript.\"}",
"{\"title\": \"Response - 5\", \"comment\": \"***Q5: Table 2 presents results for Llama-2-13B, Llama-2-13B-Chat, and Llama-2-70B. Why are the results for Llama-2-70B-Chat and Llama-2-7B(-Chat) not included?***\\n\\n**A5:** Thank you for your inquiry regarding additional experimental results. In response, we now provide results for **LLaMA-2-70B-Chat** and **LLaMA-3-70B-Instruct**, complementing the previously presented results for LLaMA-2-13B, LLaMA-2-13B-Chat, and LLaMA-2-70B, as well as those provided in *R3-Table1*.\\n\\n> R3-Table2: Experimental Results on CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Models | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| -------------------- | :--: | :--: | :------: | :-----: |\\n| LLaMA-2-70B-Chat | 0.5 | 3.43 | 0.85 | 1.31x |\\n| LLaMA-3-70B-Instruct | 0.4 | 3.76 | 0.95 | 1.33x |\\n\\nThese additional results further validate SWIFT\\u2019s effectiveness across a broader range of LLaMA models, including both chat-tuned and instruction-tuned variants.\\n\\nWe appreciate your suggestion and will incorporate these additional comparisons and discussions into our revised manuscript.\\n\\n\\n\\n**To summarize:**\\n\\nIn the discussion above, we have addressed key misunderstandings (**A1, A2**), provided additional experimental results for the LLaMA-3 series and LLaMA-2-70B-Chat to further substantiate our claims (**A3, A5**), and offered a detailed response to your inquiry regarding SWIFT's performance across diverse data types (**A4**). \\n\\nUpon reviewing the overall comments, we observe that **there are no direct challenges to the core idea of our work**. Specifically, we have clarified the overlooked comparisons with Lookahead and highlighted the substantial training costs associated with training-required methods like Eagle. 
**These clarifications further strengthen the motivation behind SWIFT and validate its effectiveness.**\\n\\n**Given that the weaknesses raised by the reviewer are largely based on misunderstandings, we respectfully hope that you will engage in a thorough discussion of our clarifications.** Additionally, we kindly request that you reconsider your rating in light of the responses and evidence we have provided. If you have any further questions or require additional clarification, please feel free to let us know.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your response, which has helped me better understand the strengths of this paper. I will increase my score.\"}",
"{\"title\": \"Follow-Up: Seeking Further Feedback\", \"comment\": \"Dear Reviewer, I hope you're doing well. Following up on our recent exchange regarding this paper, I wanted to check if there are any further concerns or feedback from your side. Your insights are invaluable to us, and we're keen to address any remaining issues.\"}",
"{\"summary\": \"By adaptively skipping intermediate layers during inference, SWIFT improves speedups of LLMs without compromising the quality of generation. The method integrates a Bayesian optimization-based layer selection mechanism to adapt to task-specific requirements dynamically.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"While the concept of layer-skipping is not really novel, its use of Bayesian optimization can be a good idea for self-SD.\", \"weaknesses\": [\"1. The reward design and its stability under distributional changes need more explanation. Open discussion with concurrent work, such as \\\"A Unified Framework for Speculative Decoding with Multiple Drafters as a Bandit (Submitted at ICLR'25; https://openreview.net/forum?id=5haYLrlyGj)\\\", could enhance understanding of these challenges. While the primary focus is different, the insight of using a bandit approach is quite similar to this paper. I recommend the authors include a discussion of the assumptions and extensions of Bayesian optimization for layer skipping, inspired by this work.\", \"2. More discussion of related work, such as Kim et al. (2024), Stern et al. (2018), and Gloeckle et al. (2024) on pretrained blockwise parallel language models, would position the contribution better within the existing literature. These papers represent a parallel line of work to self-speculative decoding, using non-autoregressive heads instead.\", \"Gloeckle et al. (2024), Better & Faster Large Language Models via Multi-token Prediction.\", \"Stern et al. (2018), Blockwise Parallel Decoding for Deep Autoregressive Models\", \"Kim et al. (2024), Accelerating Blockwise Parallel Language Models with Draft Refinement. (https://openreview.net/forum?id=KT6F5Sw0eg)\"], \"questions\": \"1. How does SWIFT handle non-stationary input distributions during Bayesian optimization?\\n\\n2. 
Could the authors provide insights into how SWIFT performs under extreme token count variations or highly domain-specific tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Kindly Reminder on the Discussion Period\", \"comment\": \"Dear Reviewer, I hope this message finds you well. As the discussion period is nearing its end with **only two days remaining**, I wanted to ensure we have addressed all your concerns satisfactorily. If there are any additional points or feedback you'd like us to consider, please let us know. Your insights are invaluable to us, and we\\u2019re eager to address any remaining issues to improve our work.\\n\\nThank you for your time and effort in reviewing our paper.\"}",
"{\"comment\": \"Thanks a lot. I have no more concerns and I raise my score to 5.\"}",
"{\"summary\": \"This paper proposes a plug-and-play self-speculative decoding method. The authors employ a layer-skipping approach to construct a draft model. Experimental results indicate that this method achieves a 1.3-1.6 times inference speedup on Llama-2 and Code-Llama models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The method does not require training an additional model or module for drafting, making it applicable to most large language models.\", \"weaknesses\": \"1. The speedup is not as promising compared to other training-free methods like Lookahead. The authors should also present results for methods such as Medusa and Eagle, which require minimal training overhead.\\n2. It is recommended that the authors test well-trained LLMs, such as Llama-3, as models with less effective performance might yield higher speedup ratios.\\n3. The method requires different settings for different tasks. However, in real-world LLM chat applications, it is often difficult to predict the corresponding tasks of user instructions. It is suggested that the authors evaluate the method's speedup performance on benchmarks like MT-Bench, which test the general capabilities of models.\", \"questions\": \"1. Table 2 presents results for Llama-2-13B, Llama-2-13B-Chat, and Llama-2-70B. Why are the results for Llama-2-70B-Chat and Llama-2-7B(-Chat) not included?\\n2. How does the overhead of the proposed layer searching algorithm compare to the overhead of training additional modules like Eagle?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
"{\"title\": \"Response - 3\", \"comment\": \"***Q3: It is recommended that the authors test well-trained LLMs, such as Llama-3, as models with less effective performance might yield higher speedup ratios.***\\n\\n**A3:** In our manuscript, we evaluated the LLaMA-2 series following the experimental settings of Lookahead [1] and Self-SD [2], ensuring **fair comparisons with prior work**. To address your concern, we conducted additional experiments comparing SWIFT\\u2019s speedup performance on the LLaMA-2 and LLaMA-3 series, thereby showcasing **the robustness of SWIFT** regardless of the model\\u2019s overall effectiveness.\\n\\nIn addition to reporting the overall speedup, we provide key metrics including the skip ratio ($r$), mean accepted tokens (*M*), and token acceptance rate ($\\\\alpha$) for comparison. The relationship among these metrics and the expected wall-clock speedup is explained in Equation (6) of Appendix B.3.\\n\\n> R3-Table1: Experimental Results on CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Models | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| ----------- | :--: | :--: | :------: | :-----: |\\n| LLaMA-2-7B | 0.40 | 3.45 | 0.94 | 1.24x |\\n| LLaMA-3-8B | 0.40 | 3.80 | 0.93 | 1.25x |\\n| LLaMA-2-70B | 0.50 | 3.85 | 0.99 | 1.43x |\\n| LLaMA-3-70B | 0.40 | 5.43 | 0.99 | 1.41x |\\n\\n> During the optimization phase, the layer skip ratio ($r$) for LLaMA-3-70B was automatically adjusted from 0.5 to 0.4, as the token acceptance rate ($\\\\alpha$) remained below the predefined tolerance threshold (e.g., 0.7). The adjusted ratio is reflected in the table above.\\n\\nThese results demonstrate that SWIFT consistently achieves significant speedups (**1.2x\\u20131.4x**) across both LLaMA-2 and LLaMA-3 series, effectively addressing the assumption that *\\\"models with less effective performance might yield higher speedup ratios.\\\"* Although differences in layer redundancy are observed between models (e.g., $r$ values for LLaMA-2-70B vs. 
LLaMA-3-70B), SWIFT remains robust and adaptable, maintaining high acceleration performance irrespective of the model\\u2019s effectiveness.\\n\\nWe sincerely appreciate your suggestion, as it allowed us to strengthen the comparative analysis of SWIFT. These additional results and discussions will be incorporated into our revised manuscript.\\n\\n[1] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. Fu et al. ICML 2024.\\n\\n[2] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et al. ACL 2024.\"}"
"{\"title\": \"General Response\", \"comment\": \"For clarity and simplicity, we will refer to Reviewers rnoa, tWD9, LGVh, and vnfL as R1, R2, R3, and R4, respectively, in the following response.\\n\\nWe sincerely thank all reviewers for their thoughtful and constructive feedback. We are encouraged by their recognition of the key contributions and strengths of our work. \\n\\nIn particular, we appreciate the acknowledgment of our empirical studies for making *crucial* observations on layer-skipping SD (**R2**), finding our methodology *insightful* for advancing SD research (**R1**), and appreciating our efforts to *successfully* demonstrate the great potential of LLMs for self-acceleration *without additional model parameters or task-specific training* (**R2, R3, R4**). Furthermore, we are pleased that our plug-and-play SWIFT method is viewed as *widely applicable* to most large language models (**R3**), that our approach is recognized for its *novelty* compared to existing SD methods (**R2**), and that the efficiency *superiority* of SWIFT over vanilla LLM inference is well acknowledged (**R2**). We also appreciate the reviewers\\u2019 comments noting that our paper is *generally well-written* (**R2, R4**), flows *smoothly* (**R4**), and conveys key ideas in *sufficient detail* (**R2**).\\n\\nWe have carefully addressed each individual comment provided by the reviewers and believe we have successfully responded to most of their concerns. In our revised manuscript, we have incorporated the suggested experiments, additional discussions, and relevant updates to further strengthen our work. Below, we summarize the core contributions of our study, the updates to our experiments, and the in-depth discussions included in our revision.\\n\\n------\\n\\n**Core Contributions of Our Work**\\n\\n1. 
**Empirical Investigation**: We conducted an in-depth empirical analysis of LLM acceleration via layer sparsity, revealing the potential for LLM self-acceleration and its task-specific nature, underscoring the necessity for adaptive self-speculative decoding during inference.\\n2. **Novel Framework**: We propose SWIFT, the first plug-and-play self-speculative decoding algorithm that dynamically optimizes the selection of skipped layers in the target LLM on the fly, enabling lossless acceleration of LLM inference across diverse input data streams.\\n3. **Complementary Efforts:** SWIFT represents a complementary research direction to existing plug-and-play SD methods. Its layer-skipping approach is orthogonal to Jacobi-based techniques like Lookahead Decoding, and combining the two could further amplify their collective efficiency.\\n4. **Experimental Results**: Through extensive experimentation across various models and tasks, we demonstrate that SWIFT consistently achieves a 1.3x-1.6x speedup without relying on auxiliary models or additional training, while theoretically guaranteeing the preservation of the generated text\\u2019s distribution.\\n\\n------\\n\\n**Updates of experimental results during Rebuttal**\\n\\n- **`Appendix C.2`**: Added experimental results using LLaMA-2-70B-Chat and LLaMA-3-70B-series models, including both base LLMs and instruction-tuned variants.\\n- **`Appendix D.3`**: Detailed comparison with prior layer-skipping methods (e.g., LayerSkip[1] and Self-SD[2]), focusing on wall-clock speedups, training costs, and optimization latency.\\n- **`Appendix D.4`**: Analyzed the optimization burden of Self-SD[2] and compared its performance with SWIFT under similar optimization latency.\\n- **`Appendix D.1`**: Corrected the ablation study results.\\n\\n**Updates of in-depth discussions during Rebuttal**\\n\\n- **`Appendix D.5`**: Discussed the necessity and importance of plug-and-play SD methods for LLM acceleration.\\n- 
**`Appendix D.6`**: Elaborated on related work, including SD methods with early exiting and their distinctions from SWIFT.\\n\\n------\\n\\nWe believe these additions and clarifications comprehensively address the reviewers' concerns and enhance the overall quality of our manuscript. All revisions are highlighted in `magenta-colored` text for ease of reference. Our manuscript was updated on `Nov 23, AOE time`. \\n\\nWe look forward to the reviewers' favorable consideration and remain grateful for their valuable feedback.\\n\\n[1] Layer Skip: Enabling Early Exit Inference and Self-Speculative Decoding. Elhoushi et al. ACL 2024.\\n\\n[2] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et al. ACL 2024.\"}"
"{\"title\": \"Response - 4\", \"comment\": \"***Q4: Although SWIFT does not require additional training, compared with other methods, like EAGLE [2] and Medusa [1], which can achieve over a 3.05-4.26x speedup, SWIFT doesn\\u2019t show much value. As reported in EAGLE [2], the draft model is trainable within 1-2 days for 70B models.***\\n\\n**A4:** Thank you for raising this important question regarding the value of plug-and-play methods like SWIFT compared to training-intensive approaches such as Medusa [1] and EAGLE [2]. Below, we address your concern in detail:\\n\\n------\\n\\n**(1) The computational overhead of training-required methods is sometimes unacceptable.**\\n\\nTraining-required methods such as Medusa [1] and EAGLE [2], while achieving higher speedups, still incur **substantial training costs**. Despite efforts to reduce training overhead, these methods require extensive computational resources (e.g., GPU time and datasets) to deliver valid acceleration performance. For example, EAGLE requires **1\\u20132 days of training with 8 RTX 3090 GPUs** for LLaMA-33B or **up to 2 days on 4 A100 (40G) GPUs** for LLaMA2-Chat-70B, utilizing a dataset of **70k dialogues** from ShareGPT.\\n\\nThese computational burdens introduce challenges in several scenarios:\\n\\n- **Users must train new draft models for unsupported target LLMs.** If the user's target LLM is not among EAGLE's released checkpoints or if the base model is updated (e.g., LLaMA-3.x), users are forced to train a new draft model, which may exceed their available GPU resources (e.g., GPU time).\\n- **Users with small-scale acceleration needs face inefficiencies.** For instance, a researcher needing to evaluate a small set of samples (e.g., 10 hours of evaluation) would find the 1\\u20132 day training requirement for EAGLE disproportionate and harmful to overall research efficiency.\\n\\n------\\n\\n**(2) High speedups in training-required methods do not negate the value of plug-and-play SD 
research.**\\n\\nPlug-and-play SD methods, including SWIFT, are **model-agnostic and training-free**, providing **immediate acceleration without requiring additional computational overhead**. These attributes are particularly critical for large models (70B\\u2013340B) and specific use cases, as discussed above.\\n\\nAdditionally, the increasing adoption of plug-and-play SD methods such as Lookahead [3] and PLD [4] (supported in [vLLM](https://github.com/vllm-project/vllm)) highlights the demand for ready-to-use solutions. This further **validates the research value of plug-and-play SD methods**, which cater to scenarios where computational efficiency and ease of integration are paramount.\\n\\n------\\n\\n**(3) SWIFT pioneers plug-and-play SD with layer-skipping drafting, achieving state-of-the-art performance.**\\n\\nAs detailed in A2, SWIFT represents the **first plug-and-play SD method** to incorporate **layer-skipping drafting**. It consistently achieves **1.3x\\u20131.6x speedups** over vanilla autoregressive decoding across diverse models and tasks. Additionally, it demonstrates **10%\\u201320% higher efficiency** compared to Lookahead Decoding [3].\\n\\nBeyond its promising experimental results, SWIFT introduces a **novel and complementary research direction** for plug-and-play SD methods. Its approach is **orthogonal** to Lookahead Decoding, and combining the two could further amplify their collective efficiency. We believe this study provides valuable insights and paves the way for future advancements in the SD community, particularly for practical and cost-effective LLM acceleration.\\n\\n\\n\\n[1] Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. Cai et.al. ICML 2024.\\n\\n[2] EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty. Li et.al. ICML 2024.\\n\\n[3] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. Fu et.al. ICML 2024.\\n\\n[4] Prompt Lookup Decoding. Apoorv Saxena. 2023. 
[Github Repository](https://github.com/apoorvumang/prompt-lookup-decoding/).\"}",
"{\"title\": \"Response - 3 (2)\", \"comment\": \"**(3) Our proposed SWIFT obtains a promising 1.3x-1.6x speedup as a plug-and-play SD method, competitive with Self-SD which requires substantial optimization demands.**\\n\\n> R4-Table5: Experimental Results on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Methods | #Bayesian_Opt | Optimization Latency | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| --------------- | :-----------: | :------------------: | :--: | :--: | :------: | :-------: |\\n| Self-SD | 1000 | ~7.2 hours | 0.43 | 4.02 | 0.85 | 1.29x |\\n| Self-SD *w/ CA* | 1000 | ~7.2 hours | 0.43 | 5.69 | 0.98 | 1.52x |\\n| Self-SD | 5 | ~2.5 minutes | 0.50 | 1.80 | 0.57 | 0.97x |\\n| Self-SD w/ CA | 5 | ~2.5 minutes | 0.50 | 2.07 | 0.86 | 1.17x |\\n| SWIFT | | **~2 minutes** | 0.45 | 5.82 | 0.98 | **1.56x** |\\n\\n> *CA* refers to our proposed Confidence-aware inference Acceleration strategy in Section 4.2.\\n\\nAs shown in table 5, Self-SD requires a computationally expensive Bayesian Optimization process before inference (~7.5 hours for LLaMA-2-13B and ~20 hours for LLaMA-2-70B), which makes it unsuitable for plug-and-play applications. In contrast, SWIFT\\u2019s **on-the-fly optimization strategy** achieves an approximate **200x reduction in optimization latency**, with an impressive **1.56x speedup** in inference performance. \\n\\nTo validate our approach further, we augmented Self-SD with the *Confidence-aware Inference Acceleration Strategy* (*Self-SD w/ CA*). 
Even with this enhancement, SWIFT demonstrates competitive or superior performance, achieving higher speedups while maintaining minimal latency overhead.\\n\\n------\\n\\n**(4) As the first plug-and-play SD method with layer-skipping drafting, we hope SWIFT provides valuable insights and inspires further research in this area.**\\n\\nSpeculative Decoding (SD) has recently garnered significant interest from both academia and industry as an effective LLM inference acceleration strategy that preserves the original LLM's output distribution. It has been widely adopted in LLM inference applications, such as [vLLM](https://github.com/vllm-project/vllm). However, recent SD research appears to have reached a plateau, focusing mainly on incremental improvements or revisiting prior methods without fresh, innovative explorations in this field.\\n\\nIn this work, we present **the first exploration of plug-and-play SD methods with layer-skipping drafting**. SWIFT introduces an **orthogonal** approach to Lookahead Decoding [2], showcasing **promising adaptability** across diverse LLMs and dynamic data streams. Unlike most existing SD methods, SWIFT operates ***without the need for auxiliary models or additional training***, making it both **cost-effective** and **practical** for real-world applications. We believe this study not only paves the way for new research directions within the community but also provides substantial value for low-cost deployment.\\n\\n------\\n\\n**To sum up:**\\n\\nIn contrast to Self-SD, which incurs substantial optimization latency, we propose SWIFT \u2014 **the first plug-and-play layer-skipping SD method capable of dynamically optimizing the skipped layer set on the fly**. The efficiency superiority of SWIFT is established upon two key innovations regarding *optimization objective granularity* and *Bayesian optimization efficiency*. 
These advancements allow SWIFT to perform layer set optimization within user-defined acceleration tolerances, resulting in a remarkable **200x reduction in optimization latency** compared to Self-SD. We believe this study opens new avenues for research in the community while offering substantial value for practical and low-cost deployment.\\n\\n\\n\\n[1] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et.al. ACL 2024.\\n\\n[2] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. Fu et.al. ICML 2024.\"}",
"{\"title\": \"Response - 1\", \"comment\": \"We sincerely thank Reviewer rnoa for your thoughtful review and are delighted that you recognize the value of our application of Bayesian Optimization in advancing plug-and-play SD research. Below, we provide detailed responses to each of your comments.\\n\\n\\n\\n***Q1: Open discussion with concurrent work [1] could enhance understanding of these challenges. While the primary focus is different, the insight of using bandit approach is quite similar to this paper. And More discussions on related work [2,3,4]. These papers are also a parallel line of work for self-speculative decoding, while they use the non-autoregressive heads instead.***\", \"a1\": \"Thank you for your insightful suggestions and for pointing out relevant concurrent work. We appreciate the opportunity to discuss these connections.\\n\\n[1] introduces MetaSD, an advanced Speculative Decoding (SD) method that integrates multiple specialized drafters into the target LLM and employs a multi-armed bandit sampling strategy to dynamically select the optimal drafter during inference. While the optimization target and implementation strategy in [1] differ from our work, both approaches share a common goal of dynamically optimizing the drafter configuration for SD, showcasing a promising direction for advancing SD research. Specifically, [1] employs a multi-armed bandit mechanism to switch between a fixed number of $k$ specialized drafters fine-tuned during the training stage. In contrast, **SWIFT\\u2019s search space is significantly larger**, as it involves determining the layer-skipping index for a given skip ratio, which has a combinatorial complexity of $\\\\binom{L}{rL}$ as noted in L299-300. This challenge necessitated our use of Bayesian Optimization. 
We are keen to explore additional optimization strategies, including the bandit mechanism, in future work.\\n\\nWe also appreciate your recommendation to discuss [2, 3, 4], which represent an exciting parallel line of work focusing on *non-autoregressive drafting* strategies. These methods integrate multiple draft heads into the target LLM, enabling parallel generation of draft tokens at each decoding step. Notably, [4] builds on the BPD paradigm introduced in [2], accelerating inference by refining block drafts with task-independent n-grams and lightweight rescorers using smaller LMs. While these approaches require extensive training of draft models, **SWIFT complements their efforts by exploring a plug-and-play SD paradigm** that does not rely on auxiliary models or additional training, offering a more flexible and practical solution.\\n\\nWe will incorporate discussions of [1, 2, 3, 4] into our revised manuscript to provide a broader context for our work and highlight its position within the current landscape of SD research.\\n\\n\\n\\n[1] A Unified Framework for Speculative Decoding with Multiple Drafters as a Bandit.\\n\\n[2] Blockwise Parallel Decoding for Deep Autoregressive Models. Stern et al. NIPS 2018.\\n\\n[3] Better & Faster Large Language Models via Multi-token Prediction. Gloeckle et al. ICML 2024.\\n\\n[4] Accelerating Blockwise Parallel Language Models with Draft Refinement. Kim et al. NIPS 2024.\"}",
"{\"comment\": \"Thank you for your detailed response. In Figure 6, you demonstrate that SWIFT can dynamically optimize layer skip configurations. However, this seems to rely on an assumption that consecutive instructions belong to the same task type. In real-world application, with a massive number of user instructions, the diversity is much higher, and this assumption often does not hold. Therefore, I\\u2019m curious: if instances in MT-Bench are processed in a random order, what would the speedup ratio be? Additionally, I would still like to see a comparison of speedup between Lookahead and SWIFT on MT-Bench using Llama-3 as the backbone.\"}",
"{\"title\": \"Response - 5\", \"comment\": \"***Q6: Do the speedup numbers in Table 2 take into account the optimization phase? If yes, how many LLM generations are performed to obtain the results in Table 2?***\", \"a6\": \"Yes, the speedup numbers in Table 2 reflect the overall wall-clock speedup, incorporating the latencies of both the optimization and acceleration phases for all evaluated methods. As described in Section 5.1, we randomly sampled 1,000 instances from the test set for each dataset, following the setup in Self-SD [1]. The maximum generation lengths for CNN/DM, GSM8K, and TinyStories were set to 64, 64, and 128 tokens, respectively.\\n\\nTo further illustrate SWIFT\\u2019s efficiency, we provide a detailed prefilling analysis of its separate modules in Figure 6. This analysis shows that the optimization phase contributes minimally to the total inference latency, occupying just $\\\\textbf{0.8}$\\\\% of the total runtime. Specifically, the optimization phase concludes early in the process (by instance index 10), with the draft model achieving a satisfactory token acceptance rate of 0.98. Subsequently, SWIFT transitions to the acceleration phase.\\n\\nWe report two key metrics in Figure 6 to clarify SWIFT\\u2019s efficiency:\\n\\n- **Overall Speedup:** Reflects the total wall-clock speedup, including both the optimization and acceleration phases.\\n- **Instance Speedup:** Captures the speedup achieved for each individual instance.\\n\\nThe results demonstrate that SWIFT\\u2019s overall speedup progressively increases as more tokens are generated, eventually converging toward the average instance speedup. This dynamic highlights a key feature of SWIFT: ***its efficiency scales with increasing input length and the number of instances***, making it particularly advantageous in large-scale inference scenarios.\\n\\n[1] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et.al. 
ACL 2024.\\n\\n\\n**To sum up:**\\n\\n\\nIn the discussion above, we have primarily clarified some misunderstandings (**A4, A6**), added experiments based on your interests (**A2, A3**), and conducted comparisons with prior work (**A1**). We sincerely thank you for your thoughtful suggestions and detailed feedback, which we have integrated into our paper without altering its original content. We hope our responses have effectively addressed your concerns.\\n\\nIf you have any further questions or additional concerns, please feel free to discuss them with us. Additionally, we kindly request that you reconsider your rating in light of our responses. From our understanding, you hold a positive view of our work, and we believe the suggestions you raised have been appropriately addressed within the revised manuscript.\"}",
"{\"title\": \"Response - 2\", \"comment\": \"***Q2: The author only compared to the baseline on the Llama and CodeLlama models. I believe experiments should be conducted on larger models with different architectures to demonstrate the generalization of the method.***\", \"a2\": \"We would like to clarify that **we have already included the experimental results for Yi-34B and DeepSeek-Coder-33B**, along with their instruction-tuned variants in our paper, as presented in Figure 9. The detailed experimental results are illustrated in Appendix C.2. The results indicate that SWIFT achieves **efficiency improvements** ranging from **26% to 54%** on these LLM backbones, which substantiates the generalized utility of SWIFT as a general-purpose, plug-and-play SD method, offering promising inference acceleration across diverse LLM backbones.\\n\\nTo further demonstrate the generalization ability of SWIFT, **we have consolidated all results related to diverse LLM backbones below** for your convenience. If you have further suggestions for additional backbones to evaluate, please feel free to propose them, and we will gladly incorporate these in our future analyses.\\n\\n> R4-Table2: Experimental Results on CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Models | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| ----------------------- | :--: | :--: | :------: | :-----: |\\n| LLaMA-2-70B | 0.50 | 3.85 | 0.99 | 1.43x |\\n| LLaMA-2-70B-Chat | 0.50 | 3.43 | 0.85 | 1.31x |\\n| LLaMA-3-70B | 0.40 | 5.43 | 0.99 | 1.41x |\\n| LLaMA-3-70B-Instruct | 0.40 | 3.76 | 0.95 | 1.33x |\\n| CodeLLaMA-34B | 0.50 | 3.79 | 0.88 | 1.46x |\\n| Yi-34B | 0.45 | 2.74 | 0.94 | 1.30x |\\n| Yi-34B-Chat | 0.45 | 2.84 | 0.91 | 1.29x |\\n| DeepSeek-Coder | 0.50 | 4.97 | 0.99 | 1.54x |\\n| DeepSeek-Coder-Instruct | 0.50 | 3.80 | 0.88 | 1.39x |\\n\\nThese results further validate **SWIFT\\u2019s generalization ability** across a broader range of LLMs, including both chat and instruction-tuned variants.\"}",
"{\"title\": \"Response - 2\", \"comment\": \"***Q2: How does SWIFT handle non-stationary input distributions during Bayesian optimization?***\", \"a2\": \"Thank you for your question regarding SWIFT\\u2019s handling of non-stationary input distributions during Bayesian optimization.\\n\\n**SWIFT is specifically designed to handle dynamic input data streams, as discussed in Lines 474\\u2013495 of our manuscript.** It employs a dynamic mechanism that adaptively triggers the optimization phase whenever the token acceptance rate falls below 0.93. This ensures that SWIFT can optimize on-the-fly for each domain during inference, without relying on extensive pretraining.\\n\\nTo assess its performance, we conducted experiments on various tasks, including summarization, reasoning, instruction following, translation, and question answering. For each task, we sampled 500 instances from the respective test sets and concatenated them sequentially to create a dynamic input stream simulating domain shifts.\\n\\n**Key Results (Figure 7):**\\n\\n- SWIFT exhibited strong adaptability across domains, achieving an ***average token acceptance rate of 96%*** and maintaining consistent ***speedups of 1.3x\\u20131.6x***.\\n- By comparison, Self-SD struggled with domain shifts, showing a significant drop in the average token acceptance rate from 92% to 68%. This led to a sharp reduction in speedup, declining from 1.33x to an average of 1.05x under domain-shifted conditions.\\n\\nThese results underscore SWIFT\\u2019s ability to dynamically adapt to non-stationary task distributions while maintaining efficiency and performance. In the revised manuscript, we will expand on this dynamic evaluation to provide further clarity and detail.\\n\\n\\n\\n***Q3: Could the authors provide insights into how SWIFT performs under extreme token count variations or highly domain-specific tasks?***\", \"a3\": \"Thank you for your insightful question! 
As highlighted in Section 4, SWIFT is designed to ***dynamically optimize its acceleration performance*** by adapting to the characteristics of the current data stream. A detailed analysis of its acceleration performance is provided in Figure 6, which demonstrates that SWIFT\\u2019s overall speedup progressively increases as more tokens are generated, ultimately converging toward the average instance speedup. This behavior underscores one of SWIFT\\u2019s key features: ***its efficiency scales with increasing input length and the number of instances***.\\n\\nThis dynamic optimization mechanism also makes SWIFT particularly effective for ***highly domain-specific tasks*** involving large volumes of homogeneous data, such as those found in specific test sets or real-world industrial applications. In such scenarios, SWIFT can continuously refine its skipped layer configuration on the fly, enabling it to approach the optimal configuration for the domain-specific data.\\n\\nWe greatly appreciate your feedback and hope this explanation clarifies SWIFT\\u2019s adaptability and versatility under domain-specific conditions. Let us know if further details are needed!\\n\\n\\n\\n**To sum up:**\\n\\nIn the discussion above, we have provided detailed comparisons with prior work (**A1**) and clarified key features and strengths of SWIFT in response to your inquiries (**A2, A3**). We sincerely appreciate your thoughtful suggestions and detailed feedback, which we will incorporate into our paper while preserving its original contributions.\\n\\nWe hope our responses have effectively addressed your concerns. Should you have any additional questions or further feedback, we would be happy to continue the discussion. Additionally, we kindly ask you to **reconsider your rating** in light of our responses. 
Based on your comments, we understand that you have a positive view of our work, and we believe the points you raised have been thoroughly addressed in our responses.\\n\\nThank you again for your valuable input, and we look forward to further discussions.\"}",
"{\"title\": \"Response - 4\", \"comment\": \"***Q4: However, they don't seem to be evaluating their method in a dynamic setting where the underlying task (distribution) changes over time. In such a dynamic setting, would SWIFT have to interleave the optimization and acceleration phases? Would one still observe a good amount of speed up in such settings?***\", \"a4\": \"Thank you for this insightful question. **If we understand your concern correctly, we have indeed validated SWIFT\\u2019s effectiveness in handling dynamic input data streams, as detailed in Lines 474\\u2013495 of our manuscript.** SWIFT incorporates a dynamic mechanism that adaptively triggers the optimization phase whenever the token acceptance rate falls below 0.93. This enables SWIFT to optimize on-the-fly for each domain during inference without requiring extensive pretraining.\\n\\nTo evaluate its performance, we conducted experiments across various tasks\\u2014summarization, reasoning, instruction following, translation, and question answering. For each task, we sampled 500 instances from the respective test sets and concatenated them sequentially to form a dynamic input stream. \\n\\n**Results in Figure 7:**\\n\\n- SWIFT demonstrated strong adaptability across domains, achieving an ***average token acceptance rate of 96%*** and maintaining a consistent ***1.3x\\u20131.6x speedup***.\\n- In contrast, Self-SD was highly sensitive to domain shifts, with its average token acceptance rate dropping from 92% to 68%. This decline resulted in a severe reduction in speedup, falling from 1.33x to an average of 1.05x under domain shifts.\\n\\nThese findings highlight SWIFT\\u2019s ability to dynamically adapt to changing task distributions while maintaining efficiency. 
We will further elaborate on this dynamic evaluation in the revised manuscript.\\n\\n***Q5: Looking at the ablation studies in Appendix D (Table 7), it appears that dynamic verification does not bring much value as the loss in overall speed-up is minimal when one excluded dynamic verification (1.560x to 1.541x). Could authors comment on this?***\", \"a5\": \"Thank you for pointing this out! Upon review, we identified a numerical typo in our manuscript. Specifically, '*dynamic ver*' refers to the confidence-based top-k draft candidate extension in SWIFT\u2019s inference strategy. Excluding this mechanism results in a vanilla verification strategy similar to Self-SD, leading to a more substantial reduction in speedup\u2014from 1.56x to 1.34x, not 1.541x as previously reported. We provide the corrected comparison results below:\\n\\n> R2-Table6: Ablation Results on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Methods | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| ----------------------- | :--: | :--: | :------: | :-----: |\\n| Self-SD | 0.43 | 4.02 | 0.85 | 1.29x |\\n| SWIFT *w/o dynamic ver* | 0.45 | 4.39 | 0.90 | 1.34x |\\n| SWIFT | 0.45 | 5.82 | 0.98 | **1.56x** |\\n\\nThese corrected figures demonstrate that dynamic verification meaningfully improves both the token acceptance rate ($\\\\alpha$) and speedup, underscoring its value in our inference strategy. We will ensure this correction is accurately reflected in the revised manuscript. \\n\\nThank you again for your careful and thorough review. Your detailed feedback has helped us identify and address this issue, and we greatly appreciate the time and effort you put into evaluating our work. We hope that this correction clarifies our findings and ensures that this oversight does not negatively influence your evaluation of the manuscript.\"}",
"{\"title\": \"Follow-Up: Seeking Further Feedback\", \"comment\": \"Dear Reviewer, I hope you're doing well. Following up on our recent exchange regarding this paper, I wanted to check if there are any further concerns or feedback from your side. Your insights are invaluable to us, and we're keen to address any remaining issues.\"}",
"{\"title\": \"Response - 3 (1)\", \"comment\": \"***Q3: Compared with Self-SD, the innovation is still insufficient, for example, the confidence-aware inference strategies are similar to some mechanism in Self-SD[1] and EAGLE[2].***\", \"a3\": \"Thank you for raising this question regarding the innovations of SWIFT compared to Self-SD [1]. Below, we provide a detailed explanation to address this concern:\\n\\n------\\n\\n**(1) Self-SD necessitates substantial optimization latency, making it unsuitable for plug-and-play LLM inference acceleration.**\\n\\nAs detailed in Section 3 (L148\\u2013174) of our manuscript, Self-SD was the first work to explore layer-skipping drafting within the Speculative Decoding (SD) paradigm, which proposes utilizing a Bayesian Optimization process before inference to determine the skipped layer set for efficienct drafting. While this method shows promising efficacy, it nessisitates **substantial computational overhead and optimization latency** (~7.5 hours for LLaMA-2-13B and ~20 hours for LLaMA-2-70B), rendering it **unsuitable for plug-and-play LLM inference acceleration scenarios**.\\n\\nTo further illustrate Self-SD's **optimization latency**, we conducted an experiment varying the number of Bayesian optimization iterations (Self-SD uses 1000 iterations by default). 
The results are shown below:\\n\\n> R4-Table3: Experimental Results on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| #Bayesian_Opt | Optimization Latency (s) | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| :------------: | :----------------------: | :--: | :--: | :------: | :-----: |\\n| 0 | 0 | 0.50 | 1.75 | 0.56 | 0.96x |\\n| 10 | 279 | 0.49 | 1.83 | 0.57 | 0.97x |\\n| 50 | 1474 | 0.49 | 1.80 | 0.61 | 1.02x |\\n| 100 | 2898 | 0.45 | 3.04 | 0.80 | 1.19x |\\n| 200 | 5517 | 0.48 | 3.47 | 0.84 | 1.24x |\\n| 1000 (default) | 27071 | 0.43 | 4.02 | 0.85 | 1.29x |\\n\\nFrom the table, we observe that Self-SD achieves **negligible speedup improvement** with **fewer than 50** Bayesian optimization iterations (nearly equivalent to unified skipping, *#Bayesian Opt = 0*). At 100 iterations, Self-SD achieves a 1.19x speedup, but its optimization latency is nearly **25 times** that of SWIFT (1 hour *vs.* 2 minutes).\\n\\nTo further evaluate Self-SD\\u2019s performance under **the plug-and-play requirement** (i.e., optimization latency under 2 minutes), we conducted additional experiments:\\n\\n> R4-Table4: Experimental Results on LLaMA-2-13B, CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Methods | #Bayesian_Opt | Opt_Time (s) | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| --------------------------------- | :-----------: | :----------------: | :--: | :--: | :------: | :-----: |\\n| Self-SD (default) | 1000 | 27071 (~7.5 hours) | 0.43 | 4.02 | 0.85 | 1.29x |\\n| Self-SD (for plug-and-play usage) | 5 | 155 (~ 2.5 mins) | 0.50 | 1.80 | 0.57 | 0.97x |\\n\\nUnder the plug-and-play constraint (optimization latency < 2 minutes), Self-SD's speedup effect **drops significantly**, resulting in a **negative acceleration speedup (0.97x)**. 
This demonstrates that the substantial optimization overhead of Self-SD makes it ***an invalid solution*** for plug-and-play LLM inference acceleration and highlights ***the great challenges*** to develop plug-and-play SD methods with layer-skipping drafting.\\n\\n------\\n\\n**(2) We propose the first plug-and-play layer-skipping SD method, introducing significant innovations to the layer set optimization strategy.**\\n\\nBelow, we detail **the contributions of SWIFT over Self-SD**, as discussed in L168\\u2013L174 & Section 4 of our manuscript:\\n\\n- **Optimization Objective Granularity:** Self-SD calculates its optimization objective at a multi-sample level, requiring sequential decoding of all selected training samples (e.g., 8 samples with 32 tokens each) for every iteration to optimize Equation (1). In contrast, SWIFT adopts a **step-level optimization objective**, dynamically optimizing the layer set at each decoding step, which significantly reduces computational overhead.\\n- **Bayesian Optimization Complexity:** The computational complexity of Bayesian optimization grows substantially with the number of iterations. SWIFT mitigates this burden by combining **random search** with **interval Bayesian optimization**, which accelerates convergence while reducing the overall computational complexity of the optimization process.\\n\\nThese innovations enable SWIFT to optimize the skipped layer set of the target LLM **on the fly**, delivering LLM inference acceleration as a **plug-and-play SD solution**. Additionally, as you noted, SWIFT seamlessly integrates multiple advanced SD techniques, including tree drafting and confidence-aware candidate pruning, further enhancing its efficiency for practical inference acceleration.\\n\\n------\"}",
"{\"title\": \"Response to Reviewer LGVh\", \"comment\": \"Thanks for your prompt response. We are glad to hear that some of your concerns have been addressed. And we appreciate your articulation of the reasons for maintaining your current score. However, we believe there are still some **critical misunderstandings** in your evaluation. Below, we provide further clarifications and additional experiments to address them comprehensively.\\n\\n------\\n\\n**1.Clarifications on Key Misunderstandings**\\n\\n**(a) SWIFT without optimization is only a naive baseline.**\\n\\n\\nThe \\\"*SWIFT without optimization \\u2013 using a unified layer-skipping pattern*\\\" represents only a **naive baseline** and serves as the starting point in Figure 6. In contrast to this static setting, **the core innovation** of SWIFT lies in its ability to **dynamically optimize the skipped layer configuration on the fly**. As shown in Figure 6, this optimization process rapidly improves both matchness scores and instance speedup **within the first few decoding steps**, significantly outperforming the static baseline. The optimization continues to enhance speedup throughout the inference process.\\n\\n**(b) Misinterpretation of SWIFT\\u2019s applicability to real-world applications.**\\n\\nThere seems to be another misunderstanding that \\\"SWIFT *could not perform any optimization* in real-world LLM chat applications.\\\" To address this, we emphasize that **a key innovation in SWIFT** is its ability to perform skipped layer optimization **at the step level**. Even during inference with a single input instance, SWIFT can perform optimization at early LLM decoding steps, adapting to the current instance and improving upon the static unified skipping pattern. 
This step-level optimization mechanism ensures SWIFT's applicability across general inference cases, including real-world chat-model applications and domain-specific tasks.\\n\\n**(c) Complementarity with Lookahead Decoding.**\\n\\nIt is important to note that SWIFT is an **orthogonal and complementary method** to Lookahead Decoding [1]. The two approaches can be combined to amplify their respective efficiencies. Furthermore, as you recognized, even our starting point (the naive baseline) achieves comparable efficiency to Lookahead. With our proposed optimization mechanism, SWIFT delivers **10%\\u201320% higher efficiency gains** compared to Lookahead.\\n\\n------\\n\\n**2.Additional comparisons on MT-Bench**\\n\\nTo address your concerns regarding SWIFT\\u2019s performance on other benchmarks, we conducted additional evaluations on MT-Bench using Vicuna-v1.3, a widely adopted LLM for chat applications. The results are as follows:\\n\\n> R3-2-Table1: Experimental results on Vicuna-7B-v1.3 (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Writing | Roleplay | Reasoning | Math | Coding | Extraction | Stem | Humanities | Overall |\\n| --------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: | :-------: | :--------: | :-------: |\\n| Lookahead | 1.07x | 1.12x | 1.09x | 1.21x | 1.17x | 1.14x | 1.12x | 1.15x | 1.13x |\\n| SWIFT | **1.22x** | **1.27x** | **1.23x** | **1.22x** | **1.28x** | **1.35x** | **1.20x** | **1.23x** | **1.25x** |\\n\\n> R3-2-Table2: Experimental results on Vicuna-13B-v1.3 (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Writing | Roleplay | Reasoning | Math | Coding | Extraction | Stem | Humanities | Overall |\\n| --------- | :-------: | :-------: | :-------: | :-------: | :-------: | :--------: | :-------: | :--------: | :-------: |\\n| Lookahead | 1.08x | 1.17x | 1.10x | 1.19x | 1.15x | 1.16x | 1.09x | 1.14x | 1.14x |\\n| SWIFT | **1.24x** | **1.31x** | **1.29x** | **1.24x** | **1.35x** | **1.45x** | **1.28x** | 
**1.30x** | **1.31x** |\\n\\nThese results demonstrate that SWIFT **outperforms** Lookahead across all MT-Bench subtasks, achieving substantial gains in overall efficiency. Additionally, we would like to note that each subtask in MT-bench is limited to 10 instances. In real-world LLM chat applications, as we addressed in the prior response, by caching optimal settings and continually optimizing on similar input instances, the efficiency of SWIFT could be further enhanced.\\n\\n[1] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding. Fu et.al. ICML 2024.\\n\\n------\\n\\n**To sum up:**\\n\\nThe above clarifications and additional evidence underscore SWIFT\\u2019s contributions as the **state-of-the-art plug-and-play SD method**. It not only provides an orthogonal complement to Lookahead but also demonstrates superior efficiency gains across various benchmarks. If you acknowledge this claim, we believe the above reasons for negative scores have been well addressed and we hope that you will **reconsider the basis for the current score**. If you have any further concerns about the above claim or additional reasons for maintaining the negative scores, please let us know. We are eager to have a deep discussion with you.\"}",
"{\"comment\": \"Thank you for your thoughtful feedback and revisiting your evaluation of our work. We sincerely appreciate your dedication and hard work throughout the review process. Wishing you a pleasant day ahead.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response - 1\", \"comment\": \"We sincerely thank Reviewer vnfL for the positive feedback, and we deeply appreciate the time and effort you have dedicated to reviewing our submission. We are encouraged by the recognition of *our efforts to demonstrate the efficacy of SWIFT through experiments*. We are also delighted to know that you find our manuscript to be *well-written* and *fluid in its presentation*, and that you appreciate SWIFT's integration of *cutting-edge* techniques to enhance its practical performance.\\n\\nBelow, we provide detailed responses to your comments:\\n\\n\\n\\n***Q1: The author should compare their method with Self-SD[1] in table 2, since their method is an improvement of the latter.***\", \"a1\": \"We appreciate your inquiry of comparisons between SWIFT and Self-SD [1]. We provide the comparison results below. In addition to reporting the overall speedup, we provide key metrics including the skip ratio ($r$), mean accepted tokens (*M*), and token acceptance rate ($\\\\alpha$) for comparison. The relationship among these metrics and the expected wall-clock speedup is explained in Equation (6) of Appendix B.3.\\n\\n> R4-Table1: Experimental Results on CNN/DM (Greedy Decoding, FP16 Precision)\\n\\n| Methods | Plug-and-Play | Optimization Latency | $r$ | *M* | $\\\\alpha$ | Speedup |\\n| --------------- | :-----------: | :------------------: | :--: | :--: | :------: | :-------: |\\n| Self-SD | No | ~7.2 hours | 0.43 | 4.02 | 0.85 | 1.29x |\\n| Self-SD *w/ CA* | No | ~7.2 hours | 0.43 | 5.69 | 0.98 | 1.52x |\\n| SWIFT | Yes | **~2 minutes** | 0.45 | 5.82 | 0.98 | **1.56x** |\\n\\n> *CA* refers to our proposed Confidence-aware inference Acceleration strategy in Section 4.2.\\n\\nSelf-SD necessitates a time-intensive Bayesian Optimization process before inference (~7.5 hours for LLaMA-2-13B and ~20 hours for LLaMA-2-70B). 
In contrast, SWIFT introduces an on-the-fly optimization strategy, resulting in an approximate **200x reduction in optimization latency** while maintaining a **1.56x speedup**. We further augmented Self-SD with our *Confidence-aware inference Acceleration strategy* (Self-SD *w/ CA*). Even compared to this augmented version, SWIFT achieves competitive speedups.\\n\\nWe provide further comparative analysis of SWIFT versus Self-SD in **A2&A3 to Reviewer tWD9 (R2)**, discussing speedups, computational overhead, and performance with limited optimization iterations. These results and discussions will be incorporated into the revised manuscript. We sincerely appreciate your inquiry, which allowed us to strengthen the comparative analysis of our work.\\n\\n\\n\\n[1] Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding. Zhang et al. ACL 2024.\"}",
"{\"title\": \"Response - 1\", \"comment\": \"We sincerely appreciate your thoughtful and professional comments. We are delighted by your recognition of our main idea, from its motivation to the experimental validation of \\\"the great potential of LLMs for self-acceleration without additional model parameters or task-specific training.\\\" We are also encouraged by your acknowledgment of our two key empirical observations, as we believe basing our work on experimental evidence provides a strong foundation for further exploration.\\n\\nWe are also grateful for your remarks on the paper\\u2019s clarity, noting that our key ideas are presented with sufficient detail and that SWIFT offers meaningful *novelty* compared to prior SD methods. Your recognition of these aspects motivates us to continue refining and advancing this line of research.\\n\\nIn addition to your praise for the paper\\u2019s main contributions, we have carefully considered your constructive feedback and suggestions for clarification and additional experiments. We agree that these points enhance the robustness of our findings and further validate the main ideas without detracting from their significance.\\n\\nBelow, we provide detailed, point-by-point responses to each of your comments.\\n\\n***Q1: There is room for improvement in the discussion of related prior work. Given that Elhoushi et al. 2024 also leverage layer skipping during the drafting phase, a detailed discussion of this work is warranted. Furthermore, the authors may also want to cite Yang et al.***\", \"a1\": \"Thank you for this insightful feedback. LayerSkip [1] explores an innovative approach to self-speculative decoding by implementing early-exit drafting, where the LLM generates drafts using only its earlier layers and then verifies these drafts with the full-parameter LLM. 
To support this process, LayerSkip **necessitates a time-consuming training process** involving layer dropout and early exit losses, which, while effective, demands significant computational resources for either pretraining or task-specific fine-tuning (as compared in R2). Moreover, this training process **modifies the original output distribution of the target LLM**, potentially impacting the consistency and reliability of LLM generation outputs.\\n\\n> Similarly, PPD [2] also explores early-exiting drafting; however, rather than relying on a single language modeling classifier from the final layer, PPD investigates classifiers trained for each individual layer. \\n\\nIn comparison to LayerSkip [1], our proposed SWIFT selects intermediate layers of LLMs to skip *on the fly*, without requiring auxiliary models or additional training processes, making it a *plug-and-play* solution for accelerating LLM inference. Furthermore, SWIFT theoretically preserves the original output distribution of the target LLM, achieving a stable 1.3x-1.6x speedup without altering model behavior. We will integrate these points into a revised discussion on related work in our manuscript.\\n\\n\\n\\n[1] Layer Skip: Enabling Early Exit Inference and Self-Speculative Decoding. Elhoushi et al. ACL 2024.\\n\\n[2] Predictive Pipelined Decoding: A Compute-Latency Trade-off for Exact LLM Decoding. Yang et al. TMLR 2024.\"}"
]
} |
EKCubxFdOs | LLaMoCo: Instruction Tuning of Large Language Models for Optimization Code Generation | [
"Zeyuan Ma",
"Hongshu Guo",
"Jiacheng Chen",
"Zhiguang Cao",
"Yining Ma",
"Yue-Jiao Gong"
] | Recent research on optimization using large language models (LLMs) typically involves either iterative next-step solution seeking or directly prompting LLMs to generate critical optimization codes. However, these methods often suffer from low computational efficiency, high sensitivity to prompt design, and a lack of domain-specific knowledge. We introduce LLaMoCo, the first instruction-tuning framework designed to adapt LLMs for solving optimization problems in a code-to-code manner. LLaMoCo features a comprehensive instruction set that includes code-style problem descriptions as input prompts and robust optimization codes from expert optimizers as target outputs. We then develop a novel two-phase learning strategy with a contrastive learning-based warm-up to enhance convergence during instruction tuning. Extensive experiments demonstrate that a CodeGen (350M) model tuned by our LLaMoCo yields a powerful domain-specific model for generating expert-level optimizers, achieving superior performance compared to GPT-4 Turbo and other competitors on both synthetic and realistic problem sets. The trained model and the usage instructions are available online. | [
"Large Language Models",
"Instruction Tuning",
"Optimization Code Generation"
] | Reject | https://openreview.net/pdf?id=EKCubxFdOs | https://openreview.net/forum?id=EKCubxFdOs | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"tEiQesVqlu",
"qHMtxpdO5n",
"ppAL8tpf2e",
"nkTqS5hEjZ",
"nZl3ku8xfv",
"n5mWymrtGd",
"l6hOEXhe9P",
"kkORyGITjm",
"j2QfsVsG50",
"f7iGhT7HLh",
"do4T3FTe6D",
"cm0qhsYPN0",
"cfY01w61Rk",
"ZvS85dbdjl",
"Ur2SXZceQG",
"PdJXieEW19",
"Pawh3oLjQA",
"Nl2iTeTEE7",
"LgZXBYeOg5",
"L2tkXYgPL3",
"KlF7PdBZO2",
"KODzWsXO53",
"KNURgOAlld",
"FeCvUtyU3B",
"DG7HgItPCy",
"9VxxAkQD8j",
"7sRiGw9k6u",
"7n5JtD1UJV",
"6djUqateSu",
"5cv9KzsuPQ",
"5YKGFOClHa",
"5L3COnb5tR",
"5DB3VgiIET",
"4mVQf4xa5I",
"41fOocvo72",
"3eisuH6rfK"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1730698575093,
1732687055722,
1733135045110,
1732785652658,
1732946197092,
1732785552012,
1730030173523,
1733134959037,
1730031620291,
1733197400092,
1732305190322,
1732951738610,
1733380046786,
1732520824399,
1732958182243,
1732304981410,
1732685008301,
1729779107529,
1732613412319,
1733134821013,
1732304891816,
1732306225736,
1733193173655,
1732520794137,
1733281356281,
1732304619934,
1732305268954,
1732889517089,
1732947465095,
1732510326793,
1732962701073,
1732946453674,
1732304691186,
1732306272892,
1737523765002,
1732613663587
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_mqqR"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_ugM1"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_1QNj"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Area_Chair_Ksno"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_ugM1"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_mqqR"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_eFRN"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_1QNj"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_ugM1"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_ugM1"
],
[
"ICLR.cc/2025/Conference/Submission6367/Reviewer_eFRN"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6367/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces LLaMoCo, a framework for fine-tuning general-purpose Large Language Models (LLMs) to generate optimization code through instruction tuning. The authors construct a specialized code-to-code instruction dataset tailored for optimization tasks. They enhance the training process with techniques such as contrastive warm-up, data augmentation via rephrasing, and balanced sampling. These methods are evaluated across three pre-trained models of different sizes (S, M, L), showing significant performance improvements. An ablation study further validates the effectiveness of the proposed techniques. Overall, the paper presents a promising approach to adapting LLMs for the specialized task of optimization code generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Specialized Dataset Creation: The development of a tailored code-to-code instruction dataset is a significant contribution. It aligns the fine-tuning process closely with the target task and provides a valuable resource for future research in optimization code generation.\\n2. Innovative Training Enhancements: Implementing contrastive warm-up, data augmentation through rephrasing, and balanced sampling demonstrates a comprehensive strategy to improve model performance. These techniques address common challenges in model training, such as overfitting and data imbalance.\\n3. Comprehensive Evaluation and Analysis: Evaluating the framework across models of varying sizes offers insights into scalability and the impact of model complexity. The inclusion of an ablation study allows for a deeper understanding of how each training enhancement contributes to the overall performance.\", \"weaknesses\": \"1. Unexpected Performance Across Model Sizes: Table 1, 2 and 3 show that the performance of LLaMoCo-S, LLaMoCo-M and LLaMoCo-L are very similar. 
The results also show that LLaMoCo-S sometimes outperforms its larger counterparts (LLaMoCo-M and LLaMoCo-L), despite having significantly fewer parameters. This is counterintuitive and raises concerns about potential inefficiencies in leveraging larger model\\u2019s increased capacity.\", \"questions\": \"1. Investigate Model Performance Discrepancies: It would be beneficial to analyze why the smaller model occasionally outperforms larger ones. This could involve examining the training dynamics, learning rates, or potential overfitting issues in larger models. Providing insights or adjustments based on this analysis would strengthen the validity of the results.\\n2. Expand Baseline Comparisons: Could the authors add another baseline of ChatGPT o1-mini/o1-preview? Since o1-mini/o1-preview are reasoning/coding/math enhanced models. I expect it to perform better than ChatGPT 4o. These models are designed for coding tasks and would serve as competitive benchmarks to better evaluate LLaMoCo's performance. Incorporating such comparisons would contextualize LLaMoCo's performance within the broader landscape of code generation research. \\n3. Enhance Robustness Evaluation: Assessing the models on out-of-distribution samples or real-world optimization problems beyond the dataset used for training could demonstrate the generalization capabilities and practical applicability of LLaMoCo, which could alleviate/address the concern of \\u201cUnexpected Performance Across Model Sizes\\u201d.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We sincerely appreciate your positive feedback on our LLaMoCo! Thanks for the time and efforts you have contributed to improve our paper.\"}",
"{\"comment\": \"Dear Reviewer #1QNj:\\n\\nThe discussion period will end soon. We have provided point-to-point responses for your review comments. If you still have any concerns, we look forward to hearing from you and will address them before the discussion ends. \\n\\nBest regards, the authors\"}",
"{\"title\": \"Request for further feedback\", \"comment\": \"Dear reviewer #ugM1:\\n\\nSince the discussion period has been extended, we respectfully request your feedback on our responses. In these responses, we have conducted additional experiments and provided in-depth discussions to address your concerns. If there are any further concerns, we are eager to continue this discussion and address them. We look forward to hearing from you.\\n\\nBest regards,\\nthe authors\"}",
"{\"title\": \"Response to your further feedback (part 1/2)\", \"comment\": \"We appreciate the reviewer for the timely feedback. We provide the following point-by-point responses to address your remaining concerns.\\n\\n**[Q2.1, performance difference]**\\n\\nWe would clarify that the seemingly \\u201cmodest\\u201d improvement is caused by the **normalized performance metric** (detailed in Appendix D), which scales optimization performance by the objective value range of the target problem instance. These ranges can be extremely large (e.g., $10^{20}$), causing performance differences to appear smaller than they are. For instance, in Table 1, the small model (LLaMoCo-S, 350M) achieves a performance of 81.843%, which seems \\\"similar\\\" to the medium model (LLaMoCo-M, 2.7B) at 83.369%. However, this 1.5% difference reflects a significant performance gap in absolute terms. The reason we normalize the optimization performance is that, given the various objective value scales across problem instances, it is unreasonable to compute the absolute average final objective value across problem instances. Additionally, we believe that such normalization improves the readability of the tabulated results. \\n\\n**[Q2.2, what does LLaMoCo learn?]**\\n\\nWe would clarify that LLaMoCo is not merely learning the code format for problem-solving. Instead, it learns a comprehensive model that generates effective and executable optimizer code for solving optimization problems. We explain this from four aspects:\\n\\n1. **LLaMoCo learns how to understand an optimization problem**. We train LLaMoCo to generate optimization code in an end-to-end style based on the stipulated problem description format. By doing this, LLaMoCo learns to understand the language description of the given optimization problem, including the objective, search range, number of function evaluations, number of dimensions and additional constraints, during its training. \\n2. 
**LLaMoCo learns how to select a suitable optimizer for the given problem**. Thanks to the end-to-end instruction set we have constructed, LLaMoCo is capable of locating the most effective optimizer for the given optimization problem according to its understanding of that problem. Unlike algorithm selection methods, which merely choose the best algorithm from a predefined pool, LLaMoCo generates complete optimizer source codes. These codes not only specify the selected algorithm but also include the necessary implementation details, ensuring compatibility with various optimization problem descriptions. \\n3. **LLaMoCo learns how to configure the optimizer properly**. Specifically, LLaMoCo performs hyperparameter tuning as part of the optimization code generation process. This originates from the fine-grained grid search-based benchmarking when we construct the instruction-tuning set (Section 3.1, lines 233-240). By tailoring hyperparameter values to the specific problem instance, we train LLaMoCo to provide a level of configurability according to its understanding of the problem instance.\\n4. **LLaMoCo learns how to address the unique semantic alignment issue in the optimization domain**. In LLaMoCo, when we represent an optimization problem in a stipulated language format, the difference between two significantly different problems might be narrowed down (as we described in Section 3.2, lines 263-270). This motivates us to train LLaMoCo with the proposed contrastive warm-up first to align the semantic differences between diverse problem instances, hence improving the learning effectiveness of the subsequent SFT process. 
\\n\\n**[Q1, 350M model\\u2019s performance]**\\n\\nWe would argue that, according to the results we presented in Table 1, the optimization codes generated by our LLaMoCo-S (350M CodeGen model) show a competitive error rate compared with the larger LLaMoCo models and a significantly lower error rate than the general LLM baselines we have compared, which demonstrates the consistent training effectiveness of LLaMoCo on foundation models with different capacities. To demonstrate LLaMoCo\\u2019s effectiveness and executability, we provided in the original paper:\\n\\n1. some generation examples of LLaMoCo-S (350M) in Appendix F.1, F.2 and F.3, where this small model is capable of generating fully executable and effective optimizer programs for unconstrained problems, constrained problems and realistic problems. We respectfully request the reviewer to check these demonstrations. \\n2. an anonymous tutorial project accessible online (https://anonymous.4open.science/r/LLaMoCo-5125), where we also provide this 350M model and step-by-step generation codes for further validation. We respectfully request the reviewer to run the \\u201cLLaMoCo_For_Review.ipynb\\u201d file to validate the robust effectiveness and executability of LLaMoCo-S. You can replace the showcase problem we used there with your own optimization problem, following the stipulated format there, and observe the consistent correctness of the generated code.\"}",
"{\"title\": \"Request for further feedback\", \"comment\": \"Dear reviewer #1QNj:\\n\\nSince the discussion period has been extended, we respectfully request your feedback on our responses. In these responses, we have conducted additional experiments and provided in-depth discussions to address your concerns. If there are any further concerns, we are eager to continue this discussion and address them. We look forward to hearing from you.\\n\\nBest regards,\\nthe authors\"}",
"{\"summary\": \"This paper presents LLaMoCo, a pioneering framework that maps optimization problem descriptions directly to expert-level optimization code through instruction tuning. By creating a comprehensive dataset of (problem, best-solver) pairs and using a two-phase training strategy, even a small model (350M parameters) can surpass GPT-4 in selecting and generating appropriate optimizers for both synthetic and realistic optimization tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 350M parameter model achieves 81.8% optimization performance vs GPT-4's 74.2% (without prompting) while using only 2.4K tokens vs 3.5K tokens.\\n2. Data pipeline converts 6000 problems to 32570 training pairs through systematic benchmarking of 23 optimizers across different configurations.\", \"weaknesses\": \"1. Zero-shot evaluation tested on only 8 realistic problems, requiring more cases to validate the claims.\\n2. GPT-4 baseline with vector search not evaluated.\\n3. Grid search necessity on original problems is subtle, some parameters are hard to set without careful data observation, requiring further validation of selection appropriateness.\", \"questions\": \"1. Further experiments needed to demonstrate the combined effect of SFT and alignment.\\n2. Grid search \\\"best\\\" performance criteria not clearly defined, benchmarking process lacks clear evaluation metrics for optimizer selection.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer #eFRN:\\n\\nThe discussion period will end soon. If you still have any concerns, we look forward to hearing from you and will address them before the discussion ends. \\n\\nBest regards, the authors\"}",
"{\"summary\": \"This paper proposes a data generation and instruction tuning method for optimization-problem-solving LLMs. The authors conduct comprehensive experiments to demonstrate the optimization capabilities of the instruction-tuned LLMs and analyze the contribution of each component of the method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces the first complete framework for training LLMs to solve optimization problems, including instruction-tuning dataset construction and detailed methods for training. The method is well-described and effective, making a significant contribution to the optimization community.\\n\\n2. The experiments demonstrate performance improvements on both synthetic and realistic problem sets across different scales of LLMs, highlighting the generalization and effectiveness of LLaMoCo.\", \"weaknesses\": \"1. Lack of sufficient novelty. Several key components of the method follow prior work [1-3], particularly the instruction-tuning approach (Section 3.2), which reduces its originality. Although this paper introduces the first instruction-tuning framework for optimization tasks, it primarily applies standard training techniques. The authors should emphasize their main innovations more clearly in the paper.\\n\\n2. Writing. Figure 1, which is overly simplified, does not effectively highlight the main differences between LLaMoCo and previous methods. The authors should include more details of the method. There are typos in the caption of Figure 2 (wither -> either). The capitalization of \\u201cLaTeX\\u201d in the full paper is inconsistent.\\n\\n[1] Problem definitions and evaluation criteria for the CEC 2021 on single objective bound constrained numerical optimization.\\n\\n[2] UniXcoder: Unified cross-modal pre-training for code representation.\\n\\n[3] Exploring the limits of transfer learning with a unified text-to-text transformer.\", \"questions\": \"1. 
What is the impact of dataset size on the training performance? Will the performance of the models continue to improve when using more data?\\n\\n2. How to control the quality of the synthesized tasks? Can we ensure that unsolvable optimization problems or problems with only trivial solutions are not synthesized?\\n\\n3. Why does the computational overhead of the trained models increase? (in Table 1)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We appreciate the reviewer for acknowledging our work's novelty and contributions. Thanks for your precious time, valuable comments and positive feedback!\"}",
"{\"title\": \"Response to Reviewer #ugM1 (part 1/2)\", \"comment\": \"We appreciate the reviewer for the valuable comments. Thank you for recognizing our work as pioneering, the superior performance of LLaMoCo compared to the GPT-4 model in solving optimization problems, and the systematic construction of our training dataset. To address your remaining concerns, we provide the following point-by-point responses.\\n\\n**[W1, add zero-shot evaluation]**\\n\\nWe understand the reviewer's concern regarding the importance of testing zero-shot evaluation on a large dataset. However, we believe there may be a misunderstanding. To clarify, the first six problems in Table 3 represent individual engineering problem instances, while the latter two\\u2014HPO-B and Protein-Docking\\u2014are extensive problem collections containing hundreds of instances. For example, the Protein-Docking collection consists of diverse protein-protein complexes with varying structures, each presenting a challenging optimization landscape. In total, the number of tested problem instances amounts to 6 + 128 + 128 = 262 rather than 8.\\n\\nTo further address your concern, we conducted additional testing of our trained model on a new realistic problem collection derived from the first six problems in Table 3. This collection, proposed by Kumar et al. [1], consists of 57 real-world constrained optimization problems sourced from a diverse range of engineering scenarios. The comparison results (averaged across all 57 problems) between our LLaMoCo and other baselines are presented in the following table:\\n\\n| | OPRO | LMEA | CodeGen-Mono350M | Phi-2-2.7B | DeepSeekMathInstruct-7B | GPT-4 Turbo | Code Llama-7B | Llama2-70B | LLaMoCo-S | LLaMoCo-M | LLaMoCo-L |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Err. | - | - | 99.487% | 98.131% | 68.921% | 40.148% | 99.344% | 99.473% | 5.984% | 5.479% | **5.359%** |\\n| Rec. 
| - | - | 81.166% | 58.546% | 15.470% | 16.791% | 58.101% | 59.189% | 10.648% | 10.486% | **10.198%** |\\n| Perf. | 31.832% | 27.549% | 26.477% | 33.460% | 58.044% | 61.468% | 49.481% | 47.146% | 84.462% | 87.279% | **88.135%** |\\n| Comp. | 253k | 298k | 2.1k | 2.1k | 2.1k | 3.6k | 2.0k | 1.9k | 2.6k | 2.6k | 2.6k |\\n\\nThe results further validate the effectiveness and superior performance of our LLaMoCo. We have added corresponding text content in lines 431-433 and the above results and discussion in Appendix E.2. \\n\\n[1] Kumar, Abhishek, et al. \\\"A test-suite of non-convex constrained optimization problems from the real-world and some baseline results.\\\"\\u00a0*Swarm and Evolutionary Computation*\\u00a056 (2020): 100693.\\n\\n**[W2, add GPT-4 vector search baseline]**\\n\\nWe have conducted a baseline experiment based on our understanding of \\u201cGPT-4 vector search.\\u201d Specifically, we utilized the GPT-4 vector embedding system to generate vectorized embeddings for all prompts in our training dataset. During testing, the tested prompt was also processed through the GPT-4 vector embedding system to obtain its vectorized embedding. We then identified the most similar prompt in the training dataset by calculating the L-2 distance between the embeddings. Finally, the tested prompt, along with the most similar prompt and its corresponding answer from the training dataset, were fed into the GPT-4 model to generate an optimizer code. In this setup, the most similar prompt and its corresponding answer serve as an example of in-context learning. We present the comparison results of this GPT-4 vector search baseline, GPT-4 baseline and our LLaMoCo-L on the test set $\\\\mathbb{I}_{eval}$ in the following table:\\n\\n| Baseline | GPT-4 Turbo | GPT-4 vector search | LLaMoCo-L |\\n| --- | --- | --- | --- |\\n| Err. | 41.667% | 9.336% | **5.509%** |\\n| Rec. | 13.072% | 12.853% | **10.461%** |\\n| Perf. | 74.248% | 79.944% | **83.451%** |\\n| Comp. 
| 3.5k | 7.1k | **2.4k** |\\n\\nTwo key observations can be made: a) Providing GPT-4 with an example prompt-answer pair similar to the tested prompt significantly reduces the error rate of the generated optimizer code. b) However, this prompting strategy consumes twice as many tokens as directly prompting GPT-4, making it inefficient\\u2014especially when compared to LLaMoCo, which requires only 2.4k tokens to achieve superior optimization performance. This highlights the importance of LLaMoCo in efficiently adapting LLMs to solve optimization problems. We have added the above results and discussion in Section 4.4, Table 4 of the revised paper (colored in blue).\"}",
"{\"comment\": \"There are $N_p = 2570$ problem instances in the testing set $\\mathbb{I}_{eval}$ (we have constructed 32570 problems; 30000 of them are used for instruction tuning, and the rest are used for testing). For each problem $p_i$ in the testing set, we feed its prompt to LLaMoCo and obtain the generated optimizer program. We then run the program to optimize $p_i$. There are three cases to compute the performance of LLaMoCo on $p_i$, which we denote as $Perf_i$:\\n\\n1. If the generated optimizer code is executable with no errors, we run the program to optimize $p_i$ for 5 independent runs. Then the performance is calculated as $Perf_i = \\frac{1}{5} \\sum_{j=1}^{5} \\frac{f_{i,j}^* - f_i^*}{f_{i,j}^0 - f_i^*}$, where $f_{i,j}^0$ is the initial objective value in the j-th run, $f_{i,j}^*$ is the best objective value found by the generated optimizer code, and $f_i^*$ is an approximation of the optimum of $p_i$, which we obtain from the large-scale benchmarking. \\n2. If there are runtime errors that cannot be resolved within one turn of the debugging conversation, we set $Perf_i$ to 0.\\n3. If there are runtime errors, we prompt the LLM to debug the program. If the errors are resolved within one turn of the debugging conversation, we calculate a recovery cost $r_i = \\frac{L_{err}^i}{L^i}$ as the proportion of lines in the generated code that need to be revised, where $L_{err}$ denotes the number of error lines and $L$ denotes the total number of lines. We then run the revised program to optimize $p_i$ for 5 independent runs and calculate the performance as $Perf_i = (1-r_i) \\times \\frac{1}{5} \\sum_{j=1}^{5} \\frac{f_{i,j}^* - f_i^*}{f_{i,j}^0 - f_i^*}$.
We give a punishment term to the normalized performance to reflect the errors raised in the generated optimizer code.\", \"we_provide_the_exact_formula_of_this_performance_metric_as_below\": \"$$\\nPerf_i = \\\\begin{cases}\\\\frac{1}{5} \\\\sum_{j=1}^{5} \\\\frac{f_{i,j}^* - f_i^*}{f_{i,j}^0 - f_i^*} & \\\\text{Case 1;}\\\\\\\\ 0 & \\\\text{Case 2;}\\\\\\\\ (1-r_{i}) \\\\times \\\\frac{1}{5} \\\\sum_{j=1}^{5}\\\\frac{f_{i,j}^* - f_i^*}{f_{i,j}^0 - f_i^*} & \\\\text{Case 3.}\\\\end{cases}\\n$$\\n\\nFinally, the performance of LLaMoCo on the testing set is the average across all problem instances:\\n\\n$$\\nPerf = \\\\frac{1}{N_p} \\\\sum_{i=1}^{N_p} Perf_i\\n$$\\n\\nWe thank the reviewer for this valuable comment; we will update the Appendix D to make the metric calculation clearer for our readers.\"}",
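The three-case metric described above can be sketched in code. This is a minimal illustration only: the function and argument names are our own, and we assume minimization with the 5-run protocol the authors describe.

```python
import numpy as np

def perf_single(run_results, f_star, error_unresolved=False,
                n_err_lines=0, n_lines=1):
    """Normalized performance Perf_i for one problem instance.

    run_results: list of (f0, f_best) pairs from the 5 independent runs of the
                 generated optimizer (unused when the code never ran).
    f_star:      approximated optimum of the problem (from benchmarking).
    """
    # Case 2: errors could not be fixed within one debug turn -> score 0.
    if error_unresolved:
        return 0.0
    # Normalized improvement averaged over the independent runs (Cases 1 and 3).
    perf = np.mean([(f_best - f_star) / (f0 - f_star)
                    for f0, f_best in run_results])
    # Case 3: punish code that needed debugging by the fraction of revised lines
    # r_i = L_err / L; with n_err_lines = 0 this reduces to Case 1.
    r = n_err_lines / n_lines
    return float((1 - r) * perf)

def perf_overall(per_instance_scores):
    # Average across all N_p problem instances in the testing set.
    return float(np.mean(per_instance_scores))
```

For example, a run that starts at objective 10.0 and ends at 1.0 on a problem whose optimum is 0.0 scores 0.1, and revising 2 of 10 generated lines scales that by 0.8.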
"{\"metareview\": \"### Summary of Claims and Findings\\nThe paper introduces LLaMoCo, a novel instruction-tuning framework for adapting large language models (LLMs) to generate optimization code directly from problem descriptions in Python or LaTeX. The framework features a curated dataset of optimization problems paired with expert-level solutions and a two-phase training strategy that includes a contrastive warm-up phase to align problem-solution representations. Experimental results demonstrate that LLaMoCo significantly outperforms GPT-4 Turbo and other baselines in optimization performance across synthetic and realistic tasks.\\n\\n### Strengths\\n1. **Framework Innovation**: LLaMoCo offers a structured approach to instruction tuning in the optimization domain, addressing challenges such as problem representation, model alignment, and data imbalance.\\n2. **Comprehensive Evaluation**: The paper demonstrates strong results on a diverse set of synthetic and real-world problems, coupled with scaling law analysis and ablation studies.\\n3. **Practical Impact**: The model's ability to generate efficient and effective optimization code has broad applications in engineering and optimization research.\\n\\n### Weaknesses\\n1. **Perceived Incrementality**: Reviewers noted that many components (e.g., instruction tuning, dataset construction, contrastive learning) leverage existing techniques. Despite the authors\\u2019 rebuttal, the domain-specific nature of the contributions remains under-emphasized.\\n2. **Limited Demonstration of Contrastive Warm-Up's Necessity**: While the warm-up strategy appears effective, its justification as domain-specific was unconvincing to some reviewers, with suggestions that it could apply broadly across domains.\\n3. 
**Modest Scaling Gains**: Performance improvements from smaller to larger models appeared incremental, raising questions about the optimization knowledge effectively learned.\\n\\n### Decision\\nWhile LLaMoCo represents a commendable effort to adapt LLMs for optimization code generation, the core novelty remains a concern. The primary contributions are seen as applications of existing methods rather than groundbreaking domain-specific innovations. Combined with lingering skepticism about the necessity and specialization of the proposed techniques, this submission falls marginally below the acceptance threshold.\", \"additional_comments_on_reviewer_discussion\": [\"**Novelty and Domain-Specificity**: Some reviewers revised their ratings upward, appreciating the authors\\u2019 explanations and additional experiments highlighting the dataset's construction and the training strategy's uniqueness. However, one reviewer remained unconvinced about the \\\"unique technical challenges\\\" of the optimization domain.\", \"**Experimental Updates**: Authors added new baselines (e.g., GPT-4 with vector search), zero-shot evaluations on 57 real-world problems, and scaling law experiments. These were positively received, with reviewers noting the effort and thoroughness.\", \"**Contrastive Warm-Up**: Despite its positive impact on performance, one reviewer suggested separating it into a standalone paper due to its potential generalizability.\"]}",
"{\"title\": \"Further Response (part 2/2)\", \"comment\": \"In a word, LLaMoCo indeed represents a novel sub-field of LLM for Optimization. The in-depth experimental observations (those already presented in the original version and those we have added according to all reviewers\\u2019 constructive suggestions) fully demonstrate the correctness and novelty of our special design efforts. The dataset (its format and construction), the training paradigm (contrastive warm-up + SFT) and the in/out-of-distribution evaluation procedure (four comprehensive performance metrics and each analysis module) all contribute to a systematic and easy-to-follow guideline for future practitioners, hence showing broad impact. **We really hope the reviewer could understand that to develop a comprehensive framework, LLaMoCo is unavoidably built upon a stack of advanced technologies; however, the efforts and insights behind this technical usage to make LLaMoCo efficient and effective are more important.**\\n\\n**[Contrastive warm-up]** \\n\\nGiven the further explanation above, we hope the reviewer could understand that **the contrastive warm-up is exactly part of our chain of thought** toward the ultimate goal of LLaMoCo: generating effective optimizers for optimization problems at the code level. Such an interesting finding could benefit other domains and definitely deserves further investigation (as you suggested). As we mentioned, we have highlighted this in the added future-work part in Section 5, lines 533-535 of the revised paper, to notify future readers. \\n\\nFinally, we hope the above further clarifications could address your remaining concerns. We sincerely hope you will revisit your rating in light of this additional discussion. Thank you for your time and effort in reviewing our responses.\"}",
"{\"comment\": \"Thank you. But what I mean is: \\\"exact formula\\\", such as how the Comp. term is calculated, rather than an ambiguous description of \\\"average number of tokens (input+output)\\\", which is obviously unlikely to be used directly in calculations.\\n\\nWithout an exact formula, the effectiveness of the paper will be greatly reduced.\"}",
"{\"title\": \"Response to Reviewer #1QNj (part 2/2)\", \"comment\": \"**[Q1, impact of dataset size]**\\n\\nWe agree with the reviewer that an in-depth analysis of LLaMoCo\\u2019s scaling law would significantly enhance the impact of this work, considering both dataset size and model size. To this end, we selected the CodeGen family (350M, 1B, 3B, 7B) as the backbone model for LLaMoCo and conducted experiments with four training sets of varying sizes (1k, 5k, 15k, 30k). The optimization performance of these 16 trained models on the test set $\\\\mathbb{I}_{eval}$ is presented in the following table:\\n\\n \\n\\n| model/data | 1k | 5k | 15k | 30k |\\n| --- | --- | --- | --- | --- |\\n| 350M | 47.260% | 66.661% | 80.306% | 81.843% |\\n| 1B | 46.799% | 67.829% | 81.783% | 82.541% |\\n| 3B | 47.131% | 68.492% | 82.501% | 83.315% |\\n| 7B | 45.645% | 70.147% | 82.966% | **83.513%** |\", \"the_results_above_yield_several_key_observations\": \"a) When the dataset size is very small (1k), increasing the model size does not result in performance gains, likely due to overfitting. b) For all model sizes, increasing the dataset size consistently improves performance. c) In summary, both model size and dataset size play crucial roles in determining the final performance of LLaMoCo. We have added the above results and discussion in lines 380-382 and Appendix E.3 of the revised paper (colored in blue).\\n\\n**[Q2, data quality control]**\\n\\nThe quality of the dataset is ensured by including only solvable and non-trivial problems. Specifically, for unconstrained problems, the composition and hybrid construction of base functions follow the procedure outlined in the IEEE CEC 2021 Single-Objective Competition, where the optimum and the optimal objective value of the constructed function are analytically derived. By applying rotation and shifting to the optimum, we modify the optimization landscape, ensuring the solution remains non-trivial. 
For constrained problems, we further validate solvability by running specialized optimizers from our algorithm pool on the constructed problem instances. These optimizers are executed multiple times (50 runs) to confirm the absence of constraint conflicts, ensuring that each problem instance is solvable. We have included this discussion in the Appendix A.3 of the revised paper (colored in blue). \\n\\n**[Q3, computational overhead]**\", \"the_increased_token_consumption_by_llamoco_can_be_attributed_to_two_key_factors\": \"a) The baseline models we compare against tend to output very simple optimizers and sometimes incomplete codes due to their limited optimization knowledge. In contrast, LLaMoCo often generates more complex optimizer codes to achieve superior optimization performance. b) LLaMoCo includes user-friendly comments above each line of the generated code to help users understand and customize the content as needed, enhancing its flexibility. We kindly invite the reviewer to refer to Figures 7, 8, and 9 in Appendix F for a detailed examination of these two aspects. Nevertheless, as shown in Table 1, the increased token consumption of LLaMoCo is much less than that of existing prompt-for-solution works such as OPRO (2.4k v.s. 115k) and prompt-for-optimizer works such as GPT-4 Turbo (2.4k v.s. 3.5k), which underscores the effectiveness and efficiency of LLaMoCo.\\n\\nWe hope the above responses could enhance your confidence in our work.\"}",
"{\"comment\": \"I appreciate the authors' efforts in addressing my questions. As most of my concerns have been satisfactorily addressed, I will update my score and recommend your paper for acceptance. Thank you for your detailed responses and clarifications!\"}",
"{\"summary\": \"This paper introduces LLaMoCo, a new framework for fine-tuning LLMs to solve optimization problems. The contributions of this paper are two-fold: (1) a novel fine-tuning dataset and (2) a new training warm-up strategy for training leveraging contrastive learning. Experimental evaluations demonstrate that LLaMoCo's models perform well on their held-out test set and realistic problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors have developed and plan to release a novel dataset designed to teach language models to solve optimization problems. This represents a significant contribution to both researchers and practitioners.\\n\\n2. The experimental results are compelling. I especially appreciate Table 3, where the proposed method strongly performs on realistic optimization problems (rather than toy problems).\", \"weaknesses\": \"1. This paper lacks novelty. This paper primarily focuses on fine-tuning OSS LLMs for a specific domain. The main approach is straightforward from this perspective: the authors adjusted prompts (specifically, framing problem descriptions in Python or Latex) and developed a new dataset. Could authors emphasize the unique technical challenges associated with this domain?\\n\\n2. The contrastive warm-up technique in this paper seems out of place. This technique does not appear to be specifically tailored to optimization problems. Could it be beneficial for fine-tuning in other domains as well? If not, what are the reasons? I would suggest separating this novel technique into a dedicated paper or clarifying how it suits the domain under discussion. The ablation study in Figure 4 is not very convincing, as it was tested with only a single configuration, making the results dependent on that specific setup.\", \"questions\": \"1. Is the current dataset format truly optimal? For instance, could leveraging CoT enhance performance? 
Similarly, would implementing multi-turn iterative improvements for optimization code be a promising approach?\\n\\n2. Could the proposed method be compared with previous non-LLM-based automatic algorithm selection approaches? Automatic algorithm selection for optimization problems is a well-established research area with a rich body of existing work.\\n\\n3. Could the specific technical challenges unique to this optimization domain be highlighted? (see Weakness 1)\\n\\n4. Would it be reasonable to separate the contrastive warm-up technique into a standalone paper or clarify that this technique is highly specialized for the domain under consideration? (see Weakness 2)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Request for further feedback\", \"comment\": \"Dear reviewer #mqqR, since the discussion period has been extended, we respectfully request that you check the experimental results and discussion we have added following your constructive suggestions. We look forward to your further feedback to help us improve this paper. We are open to any suggestions from you. Thanks for your precious time!\"}",
"{\"comment\": \"Dear Reviewer #ugM1:\\n\\nSince the discussion period will end soon, we respectfully request your further feedback. If you still have any concerns, we look forward to hearing from you and addressing them before the discussion ends. \\n\\nBest regards, the authors\"}",
"{\"title\": \"Response to Reviewer #1QNj (part 1/2)\", \"comment\": \"We appreciate the reviewer for acknowledging our LLaMoCo as the first complete framework making a significant contribution to the optimization community. We are pleased that the reviewer finds our paper well-described and effective, with solid experimental results demonstrating both generalization and effectiveness. We hope the following point-by-point responses address the remaining concerns.\\n\\n**[W1, novelty]**\\n\\nWe appreciate the reviewer\\u2019s concern and would like to clarify the unique innovations in LLaMoCo that set it apart from the referenced works [1], [2], and [3]:\\n\\n1. We emphasize that while the dataset construction incorporates the synthetic process outlined in [1], we significantly augment this process by carefully introducing random constraints collected from extensive convex optimization literature (as detailed in lines 200\\u2013202 and 213-215). This data augmentation ensures that the final dataset includes not only the unconstrained problems from [1] but also a diverse and novel set of constrained optimization problems. This enhancement plays a critical role in improving the generalization performance of LLaMoCo. It also provides valuable insights and inspiration for designing effective training datasets to fine-tune LLMs for other potential optimization tasks in future work.\\n2. We would like to clarify the difference between the contrastive learning introduced in LLaMoCo and the approach used in Unixcoder [2]. The contrastive learning in Unixcoder involves two components: a) aligning representations of different modalities by aligning the hidden dropout masks, and b) aligning code fragments with corresponding comments. However, neither of these components is similar to the contrastive learning approach in LLaMoCo. 
As described in Section 3.2 (lines 262\\u2013269), LLaMoCo addresses a novel language alignment task with several unique challenges specific to the optimization field, including a) different prompts (optimization problems) often share similar solutions (optimizers), and b) similar prompts require different solutions. To address them, our proposed contrastive learning warmup aligns the hidden vectors in the final self-attention block of decoder-only LLMs, rather than aligning hidden dropout masks as in Unixcoder. By our efficient contrastive warmup (only 5 epochs), the learning effectiveness of the subsequent SFT process is significantly improved as shown in Figure 3.\\n3. We argue that the tasks in [3], which focus on daily conversations (a relatively simple domain), are significantly less complex compared to the optimization tasks addressed in LLaMoCo, which are challenging even for human experts. In optimization tasks, constructing well-defined problem descriptions is significantly more challenging than handling daily conversations. To address this, we introduced a templated problem construction process and designed tailored prompt descriptions. Moreover, labelling optimization problems requires expert-level knowledge to identify and fine-tune well-performing optimizers and to write the corresponding code. To ensure robustness, we conducted large-scale benchmarking and grid searches to determine competitive hyperparameters. Additionally, the unique cross-modal challenges described above necessitate an effective learning paradigm, which motivated the development of our contrastive learning warmup method.\\n\\nWe will refine the introduction and methodology sections to highlight these discussions. We thank the reviewer for the valuable suggestions that strengthen our paper and enhance the discussion.\\n\\n**[W2, refining figures and writing]**\\n\\nWe have addressed the typos you mentioned in the revised paper. 
Regarding Figure 1, we have updated it in the revised paper, where we illustrate all sub-components of the instruction tuning process including the dataset construction, benchmarking, and the two-phase fine-tuning in the figure to show the technical novelty of LLaMoCo compared with existing works.\"}",
"{\"title\": \"Response to Reviewer #eFRN (part 1/2)\", \"comment\": \"We appreciate the reviewer for the valuable and constructive comments. We are grateful for your recognition of LLaMoCo as a significant contribution to the optimization domain, with compelling generalization performance across toy problems and realistic problems. Below, we provide point-by-point responses to address your remaining concerns.\\n\\n**[W1 & Q3, unique technical challenges]**\\n\\nWe would like to clarify that fine-tuning LLMs for specific domains, such as the optimization domain addressed in this paper, presents unique and domain-specific challenges. The primary contribution of LLaMoCo lies in identifying these challenges and proposing novel methodologies to overcome them. Below, we emphasize the unique technical challenges of adapting LLMs for the optimization domain, highlighting our contributions through three key aspects:\", \"novelty_1\": \"LLaMoCo is the first instruction-tuning framework for adapting general LLMs as an efficient and effective optimization tool. Existing works, such as OPRO [1], primarily rely on iterative prompting of LLMs to optimize solutions. However, these approaches suffer from a) unsatisfactory optimization performance due to the limited domain-specific knowledge of general LLMs, and b) inefficient inference modes, which consume an extremely large number of tokens. In contrast, LLaMoCo directly injects optimization knowledge into LLMs through instruction tuning, representing a novel and systematic approach to leveraging LLMs for optimization tasks. The subsequent novelties address the unique challenges of this process and directly target the limitations (a) and (b) of existing works.\", \"novelty_2\": \"Representing and collecting the optimization knowledge (Section 3.1). We propose a stipulated and unique code-to-code description format to represent optimization problems and their corresponding effective optimizers. 
This universal representation format facilitates automated data collection and simplifies the adaptation of LLMs to the optimization domain. To collect sufficient optimization knowledge, we have proposed a novel optimization problem generation process capable of synthesizing a large number of diverse optimization problem instances, which are further utilized by our proposed automated benchmarking procedure to obtain the most effective optimizer codes\\u2014the optimization knowledge.\", \"novelty_3\": \"Enhancing the training effectiveness (Section 3.2). We explicitly inject the obtained optimization knowledge into general LLMs through efficient instruction tuning. The unique challenges in this process are a) code similarity understanding issues (lines 262-269) and b) data imbalance issues (lines 299-301). We additionally introduce a contrastive warmup for aligning the code similarity, and an example-proportional mixing strategy to re-balance the training data, both of which enhance the training efficiency and stability.\\n\\nBesides, we have to note that the contribution of LLaMoCo is not limited to the above novel proposals. The dataset construction, the fine-tuning strategy and all observed empirical results would provide in-depth insights and a profound impact on the future development of the combination of LLMs and the optimization domain. Following your valuable suggestion, we have incorporated the above discussion into the related works section in the revised paper to highlight these novelties (lines 125-128, 142-145, 169-173, 181-187, colored in blue). \\n\\n[1] Chengrun Yang, et al. \\\"Large language models as optimizers.\\u201d arXiv preprint arXiv:2309.03409, 2023.\\n\\n**[Q2, compare with algorithm selection methods]**\", \"we_argue_that_comparing_llamoco_with_algorithm_selection_methods_may_not_be_entirely_appropriate_due_to_two_key_distinctions\": \"1. 
**Code Generalization Advantage**: Unlike algorithm selection methods, which merely choose the best algorithm from a predefined pool, LLaMoCo generates complete optimizer source codes. These codes not only specify the selected algorithm but also include the necessary implementation details, ensuring compatibility with various optimization problem descriptions. This level of generalization is crucial for real-world applications, where optimization problems often require customized code beyond the scope of standard algorithm selection.\\n2. **Hyperparameter Tuning**: LLaMoCo also performs hyperparameter tuning as part of the optimization code generation process. By tailoring hyperparameter values to the specific problem instance, LLaMoCo provides a level of configurability that algorithm selection methods cannot achieve. \\n\\nFollowing your suggestion, we have added some discussion in Section 4.1, lines 349-354 of the revised paper (colored in blue), where we highlight the above technical differences between LLaMoCo and Algorithm Selection methods. We kindly request the reviewer to check for the updates.\"}",
"{\"comment\": \"I appreciate the authors for their responses!\\n\\nI have read the responses. I agree with the authors for their novelty and I'm satisfied with the experiment on data scaling.\\n\\nHowever, my knowledge in the optimization domain is limited and I cannot be sure how significant this work's contribution is to the optimization community. Thus, I decided to keep my score at 6 for acceptance in the field of code generation.\"}",
"{\"title\": \"Further Response (part 1/2)\", \"comment\": \"We appreciate your timely feedback. We are very happy that we have addressed half of your concerns (Q1 and Q2). Let us further explain and address your remaining concerns in Q3 and Q4.\\n\\nFirst of all, we would like to highlight that the primary contribution of our paper lies in the design of a novel framework that accepts optimization problem descriptions at the Latex or Python code level and enables LLMs to directly generate codes for solving these problems. Developing this framework presented numerous challenges, which are not trivial\\u2014they involve fundamental issues in representing and interpreting optimization problems so that LLMs can process them effectively, as well as ensuring the generated solutions are accurate and efficient. To meet the unique requirements of this framework, we developed tailored adaptations and extensions of existing techniques (instruction tuning, dataset construction, and contrastive learning). We respectfully do not fully understand the reviewer\\u2019s comment regarding the \\\"truly domain-specific technical innovations\\\". The core challenge of our work lies in constructing the framework and identifying the fundamental problems within it. In such a context, reinventing the wheel in terms of technical methods would distract our focus from solving these high-level challenges effectively. \\n\\n**[Unique challenges in LLaMoCo]**\\n\\nBelow, we summarize each of the unique challenges encountered in developing LLaMoCo and the novel design adaptations/extensions of existing techniques we incorporated, point by point:\\n\\n1. Unique dataset construction.\\na) Challenge 1: **How to represent an optimization problem in a language description?** To address this, we proposed a stipulated problem formulation structure which facilitates the subsequent automated problem collection. 
\\n \\n b) Challenge 2: **How to efficiently attain sufficient optimization problem instances to facilitate the training of LLMs?** To address this, we proposed a novel and **fully automated problem synthesizing procedure (in Section 3.1, 197-221)**, which not only eases the random combination of objective functions, but also provides diverse optional constraints. \\n \\n c) Challenge 3: **How to gather high-quality optimization knowledge for the training problem instances?** To address this, we have carefully prepared a high-performance algorithm pool and then proposed a **large-scale grid search-based benchmarking procedure** **(in Section 3.1, 222-240, and Appendix A)** to attain the most effective optimization code for each problem instance automatically. \\n \\n2. Unique instruction tuning process.\\n \\n a) Challenge 1: **How to deal with the specific semantic alignment issue raised in the optimization domain?** As we mentioned in our previous rebuttal (W2 & Q4, a. b.), in the optimization domain, there are two special phenomena: First, some totally different problems might share the same effective optimizer. Second, when we present the optimization problem in Python and Latex code, a slight code modification may lead to a different target optimizer. These two points raise a unique challenge if we want to learn an effective code-to-code model in LLaMoCo. To address this, we introduced a **contrastive warm-up (in Section 3.2, lines 262-298)** to align the code-level semantics before the normal instruction tuning (SFT).\\n \\n b) Challenge 2: **How to deal with the data imbalance issue raised in the optimization domain?** In the optimization domain, a small number of optimizers might dominate optimization performance on a large number of problem instances, causing the imbalance in our training data. To address this, we propose using the **example-proportional mixing strategy (in Section 3.2, lines 299-312)** to re-balance the data distribution for training stability. \\n \\n3. 
Unique evaluation procedure.\\n \\n a) Challenge 1: **How to measure the performance of fine-tuned models in a comprehensive way?** To address this, we have designed **four performance metrics (in Section 4.1, lines 355-365, and Appendix D)**: Code Error Rate (Err.), Code Recovery Cost (Rec.), Optimization Performance (Perf.), and Computational Overhead (Comp.) to evaluate all aspects of LLaMoCo. \\n \\n b) Challenge 2: **How to systematically analyse the capability of fine-tuned models?** To address this, we investigate the **scaling law (in Section 4.2, lines 405-411, and Appendix E.3)** of our LLaMoCo across different model sizes and training dataset sizes, as well as the **zero-shot performance** **(in Section 4.2, lines 412-446, and Appendix E.2)** on totally unseen realistic problems.\"}",
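The semantic-alignment challenge in 2(a) above can be illustrated with a generic InfoNCE-style contrastive objective over pooled hidden states. The following is our own minimal NumPy sketch under assumed design choices (pooled last-block states, positive pairing by shared target-optimizer id, cosine similarity with a temperature), not the paper's exact loss:

```python
import numpy as np

def contrastive_warmup_loss(h, optimizer_ids, temperature=0.1):
    """Illustrative InfoNCE-style loss: prompts whose target optimizers
    coincide (same id) are pulled together in hidden space, while prompts
    with different target optimizers are pushed apart.

    h:             (B, d) pooled hidden states from the final self-attention block.
    optimizer_ids: length-B sequence giving each prompt's target-optimizer id.
    """
    z = h / np.linalg.norm(h, axis=1, keepdims=True)   # cosine normalization
    sim = z @ z.T / temperature                        # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # Numerically stable row-wise log-softmax: log p(j | i) over the batch.
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    ids = np.asarray(optimizer_ids)
    pos = (ids[:, None] == ids[None, :]) & ~np.eye(len(ids), dtype=bool)
    # Minimize the negative log-likelihood of positive pairs.
    return float(-log_prob[pos].mean())
```

Embeddings that are already clustered by target optimizer yield a near-zero loss, while randomly scattered embeddings are penalized, which matches the warm-up's goal of aligning code-level semantics before SFT.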
"{\"title\": \"Global Response\", \"comment\": \"We would like to express our sincere gratitude for the time and effort the reviewers and AC have invested in reviewing our paper. First of all, we are honored that LLaMoCo has been recognized as a **significant contribution to the optimization community** (Reviewer #mqqR, #1QNj and #eFRN) and **code generation community** (Reviewer #mqqR, #1QNj). We are also pleased to see the reviewers have commended LLaMoCo for its **novel framework** (Reviewer #mqqR, #1QNj and #eFRN), **valuable dataset proposal** (all reviewers), **innovative training paradigm** (Reviewer #mqqR, #1QNj and #eFRN), **comprehensive evaluation** (Reviewer #mqqR and #eFRN) and **superior optimization performance** (all reviewers).\\n\\nIn this global response, we primarily summarize common suggestions shared by the reviewers and provide an overview of the additional discussion with experimental results that address these suggestions, as follows.\\n\\n---\\n\\n**[Adding more baselines for comparison, Reviewer #mqqR, #ugM1 and #eFRN]**\\n\\n1. Reviewer #mqqR suggests adding GPT-4o, o1-mini and o1-preview for a comprehensive comparison, which we have included in **Section 4.4 and Table 4** (highlighted in blue) of the revised paper. \\n2. Reviewer #ugM1 suggests adding GPT-4 Vector Search as an in-context learning baseline, which we have also included in **Table 4** of the revised paper. These results further validate the superiority of LLaMoCo. \\n3. Reviewer #eFRN suggests adding some automatic algorithm selection methods as baselines. We have clarified the fundamental differences between LLaMoCo and algorithm selection methods, noting that direct comparison would be unfair. 
This explanation has been added in **Section 4.1, lines 349-354** (highlighted in blue) of the revised paper.\\n\\n**[Exploring LLaMoCo\\u2019s scaling law, Reviewer #mqqR and #1QNj]**\\n\\nReviewer #mqqR and #1QNj suggest exploring the impact of model capacity and dataset size on LLaMoCo\\u2019s performance, respectively. We hence provide an additional scaling law experiment on LLaMoCo in **Section 4.2, Table 2** and **Appendix E.3, Table 9**, where we find that both model capacity and dataset size play key roles in the final performance, and the setting we adopt in the paper achieves the best results. \\n\\n**[Evaluating LLaMoCo on real-world scenarios, Reviewer #mqqR and #ugM1]**\\n\\n1. Reviewer #mqqR requests the performance of LLaMoCo on realistic problems. We have clarified that results for eight real-world problems from diverse domains are already included in **Section 4.2, Table 3** of the original paper.\\n2. Reviewer #ugM1 suggests adding more realistic problem instances to further validate the generalization ability of LLaMoCo. Following the suggestion, we have included a comprehensive real-world problem collection of 57 engineering problems and updated the results in **Appendix E.2, Table 8** (highlighted in blue) of the revised paper, which provides clear evidence of LLaMoCo\\u2019s superior generalization performance. \\n\\n**[Highlighting novelties of LLaMoCo, Reviewer #1QNj and #eFRN]**\\n\\nFollowing the reviewers\\u2019 suggestions, we have discussed the challenges of fine-tuning general LLMs for the optimization domain and the corresponding novelties of LLaMoCo to address these challenges. 
We have added this discussion into our revised paper (Section 2, lines 125-128, 142-145, 169-173; Section 3, 181-187; all highlighted in blue) to highlight our novel methodology.\\n\\n**[Validating the ablation robustness, Reviewer #ugM1 and #eFRN]**\\n\\nReviewer #ugM1 and #eFRN suggest adding ablation results on the proposed contrastive warm-up strategy across all combinations of the foundation LLMs and training problem sets. Following the suggestion, we have added these results in **Appendix E.5, Figure 6** of the revised paper, where the contrastive warm-up strategy consistently improves the instruction tuning of LLaMoCo under various settings. \\n\\n---\\n\\nWe hope the above summary of our discussion with the reviewers makes it convenient for the reviewers and AC to grasp all the focused issues and all the efforts we have made to address them.\"}",
"{\"title\": \"Response to Reviewer #mqqR (part 1/2)\", \"comment\": \"We appreciate the reviewer for recognizing our LLaMoCo as a significant contribution to adapting LLMs for optimization code generation, supported by our dataset as a valuable resource for future work, an innovative training methodology, and a comprehensive experimental analysis. We hope the following responses address the remaining concerns effectively.\\n\\n**[W1 & Q1, model size & performance discrepancy]**\\n\\nWe appreciate the observation! Firstly, we would like to clarify that larger models generally outperform smaller ones across Tables 1 to 3, indicating no overfitting under the given data scale. We acknowledge one exception: in Table 1, the medium model (LLaMoCo-M) outperforms the large model (LLaMoCo-L) on unconstrained optimization problems. This discrepancy likely stems from differences in the base models used during fine-tuning. Specifically, LLaMoCo-S, LLaMoCo-M, and LLaMoCo-L are fine-tuned on CodeGen-Mono (350M), Phi-2 (2.7B), and Code Llama (7B), respectively, demonstrating the generalizability of LLaMoCo across various backbone models. These base model differences may contribute to the observed performance variation. \\n\\nNotably, we actually have another table used to validate the scaling law of our LLaMoCo. Specifically, the experiment presented in Table 2 uses the CodeGen-Mono model series (ranging from 350M to 7B) serving as the base for LLaMoCo. The results indeed demonstrate that performance improves as model size increases, confirming that larger models achieve significant performance gains under our data scale. 
We present this table below for your convenience:\\n\\n| Model Size | 350M | 1B | 3B | 7B |\\n| --- | --- | --- | --- | --- |\\n| CodeGen-Mono | 15.341% | 18.943% | 19.348% | 20.982% |\\n| LLaMoCo-CodeGen | **81.843%** | **82.541%** | **83.315%** | **83.513%** |\\n\\nMoreover, we would like to clarify that while the results in the table may appear similar, they are based on a **normalized performance metric** (detailed in Appendix D), which scales optimization performance by the objective value range of the target problem instance. These ranges can be extremely large (e.g., $10^{20}$), causing performance differences to appear smaller than they are. For instance, in Table 1, the small model (LLaMoCo-S) achieves a performance of 81.843%, which seems \\\"similar\\\" to the medium model (LLaMoCo-M) at 83.369%. However, this 1.5% difference reflects a significant performance gap in absolute terms. The reason we normalize the optimization performance is that given various objective value scales in different problem instances, it is unreasonable to compute the absolute average final objective value across problem instances. Additionally, we believe that such normalization improves the table readability of the results.\\n\\n**[Q2, add more baselines]**\\n\\nThank you for the suggestion! Following your suggestion, we have tested three additional OpenAI models: **GPT-4o, o1-mini and o1-preview**, on our test dataset $\\\\mathbb{I}_{eval}$. Below, we provide a table comparing their performance with the GPT-4 Turbo baseline and our LLaMoCo models: \\n\\n| Metric | GPT-4 Turbo | GPT-4o | o1-mini | o1-preview | LLaMoCo-S | LLaMoCo-M | LLaMoCo-L |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| Err. | 41.667% | 33.771% | **3.355%** | 4.107% | 5.580% | 5.434% | 5.509% |\\n| Rec. | 13.072% | 14.405% | **10.299%** | 10.641% | 10.826% | 10.349% | 10.461% |\\n| Perf. | 74.248% | 75.193% | 80.269% | 79.945% | 81.843% | 83.369% | **83.451%** |\\n| Comp. 
| 3.5k | 3.6k | 4.1k | 4.1k | 2.4k | 2.4k | **2.4k** |\\n\\nFrom the results, we observe the following: \\n\\n1. o1-mini/preview vs. GPT-4o: The o1 models achieve significantly lower coding errors compared to the GPT-4o model, demonstrating their robust coding enhancement capabilities.\\n2. o1-mini vs. LLaMoCo: On one hand, the error rate of o1-mini is lower than that of our LLaMoCo, primarily due to o1-mini's black-box training on an extremely large coding task. On the other hand, our LLaMoCo, despite being trained on a much smaller model, achieves greater optimization performance gains while consuming fewer tokens. Furthermore, we analyzed the source code generated by o1-mini, as we did for the GPT-4 Turbo model in Section 4.4. It was found that o1-mini also tends to generate a specific optimizer, the DE algorithm, for nearly all tested problems. This observation reinforces the core motivation behind LLaMoCo, which is to explore how domain-specific knowledge can be effectively injected into large language models to adapt them for specialized scientific tasks.\\n\\nWe have added the above discussion to our revised paper, Section 4.4 (colored in blue). We kindly request the reviewer to examine it.\"}",
"{\"title\": \"Response to Reviewer #ugM1 (part 2/2)\", \"comment\": \"**[W3 & Q2, grid search criteria and correctness]**\\n\\n1. grid search criteria\\n \\n We kindly refer the reviewer to Section 3.1, lines 234-240 and Appendix A.2 for all details of our benchmarking criteria. In summary, we have selected 23 representative optimizers commonly used to solve different optimization problems and then applied a two-step procedure to identify the most effective optimizer for each problem instance in our instruction tuning set. Step 1: according to the configuration space we provided in Table 6 of the Appendix, we identify the best-performing configurations of the 23 optimizers for the given problem instances. Step 2: The optimizer that achieves the highest optimization performance by using the found best-performing configuration is labelled as the most effective optimizer for that problem instance. \\n \\n2. grid search correctness\\n \\n Following your suggestions about the grid search granularity we chose (as shown in Table 6 of the Appendix), we conducted two additional benchmarking processes, with half and double the granularity of our original setting. For example, if a hyper-parameter holds four optional values in our setting: [0.2, 0.4, 0.6, 0.8], half granularity denotes [0.2, 0.8], and double granularity denotes [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]. We present the averaged optimization performance of the most effective optimizers on our problem set searched by these two granularities, normalized by our original granularity\\u2019s performance, as well as the averaged searching wall time for one problem instance in the following table:\\n \\n | | half | our setting | double |\\n | --- | --- | --- | --- |\\n | performance | 71.793% | 1 | 102.344% |\\n | wall time | 6s | 211s | 6379s |\\n \\n The results reveal an evident tradeoff between the searching effectiveness and the searching efficiency of different grid search granularities. 
The searching wall time increases exponentially since there are 4-5 hyper-parameters in an optimizer. However, the performance improvement obtained by spending so many additional computational resources is only 2.344%. We believe this result validates the appropriateness of our chosen grid search granularity. We have added the above discussion in Appendix E.4 of the revised paper (colored in blue). We have also added some text content in lines 239-240 (colored in blue) of the revised paper to guide readers to this discussion.\\n \\n\\n**[Q1, combined effect of SFT and contrastive alignment]** \\n\\nWe understand your concern since we only showcase the performance gain curve of our LLaMoCo-S on the left of Figure 3. We have added the same performance gain curves of the other eight scenarios (three models LLaMoCo-S/M/L and three test sets in Table 1). These curves are now presented in Figure 6, Appendix E.5 of the revised paper, where we can observe consistent learning enhancement by introducing our proposed contrastive learning warmup. We kindly request the reviewer to examine these figures. We have also added some text content in lines 475-476 of the revised paper to guide readers to review these results.\"}",
"{\"comment\": \"Thank you for your response. While part of my question has been addressed, I still have several concerns, three of which are as follows:\\n\\n1. In my understanding, code generated by a 350M parameter model is typically quite poor. There need to be sufficient examples demonstrating its effectiveness and executability, as well as analysis of why it fails.\\n2. The improvement from 350M to 7B parameters appears relatively modest. What accounts for this? I'm particularly puzzled about what exactly the model is learning - is it merely learning the code format for problem-solving?\\n3. Could you please provide a breakdown of the frequency of each optimizer's occurrence in training and inference respectively?\"}",
"{\"comment\": \"Appendix D doesn't seem to provide an exact formula for the performance, but rather states each metric separately. So does it have an exact formula? (This was also my original Q2)\"}",
"{\"comment\": \"I appreciate the authors' detailed responses. However, I remain unconvinced that the paper adequately demonstrates \\\"unique technical challenges\\\" specific to the optimization domain. The presented novelties appear to be applications of existing approaches (instruction tuning, dataset construction, and contrastive learning) rather than truly domain-specific technical innovations. Moreover, while the authors argue that their contrastive warm-up technique is specifically tailored to optimization problems, the justification for this claim remains weak. Therefore, I maintain my original rating of 5.\"}",
"{\"comment\": \"In LLaMoCo, we calculate the Comp. term of our fine-tuned model and the baselines by counting the tokens consumed by an LLM to generate a complete optimizer code for the given problem prompt. Specifically, we first use the \\\"transformers.AutoTokenizer\\\" interface to initialize a tokenizer. Then for each problem instance $p_i$ in the testing set, we use the tokenizer to tokenize the problem prompt of $p_i$ and record the number of tokens in the token list as $N_{in}^i$. Once the LLM generates a complete optimizer code, we use the same tokenizer to tokenize the generated code string and record the number of tokens in the token list as $N_{out}^i$. Then the Comp. term is calculated as $Comp. = \\\\frac{1}{N_p} \\\\sum_{i=1}^{N_p}(N_{in}^i + N_{out}^i)$. We hope this elaboration could address your concern.\\n\\nThank you for the precious time and valuable suggestion! We will update this exact formula for the Comp. term, as well as the above Perf. term, into the revised paper.\"}",
"{\"title\": \"Response to your further feedback (part 2/2)\", \"comment\": \"**[Q3, optimizer\\u2019s occurrence frequency]**\\n\\nWe provide the following two tables, which present the frequency of the 23 optimizers in our proposed advanced optimizer pool.\\n\\nThe frequency in the training set.\\n\\n| **SAMR-GA** | **GA-TDX** | **Vanilla DE** | **DEAP-DE** | **HECO-DE** | **MadDE** | **AMCDE** | **Vanilla PSO** |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| 0.02% | 17.8% | 20.0% | 0.03% | 0.03% | 1.67% | 0.03% | 0.03% |\\n| **GLPSO** | **sDMS-PSO** | **DTPSO** | **SEP-CMA-ES** | **BIPOP-CMA-ES** | **MMES** | **Vanilla BO** | **LA-MCTS** |\\n| 2.78% | 0.80% | 4.40% | 0.02% | 13.5% | 0.02% | 0.02% | 0.05% |\\n| **SA** | **Dual Annealing** | **NSA** | **SLSQP** | **Trust-Constr** | **COBYLA** | **L-BFGS-B** | |\\n| 0.10% | 10.8% | 0.17% | 24.6% | 0.03% | 3.08% | 0.02% | |\\n\\nThe frequency in the testing set.\\n\\n| **SAMR-GA** | **GA-TDX** | **Vanilla DE** | **DEAP-DE** | **HECO-DE** | **MadDE** | **AMCDE** | **Vanilla PSO** |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| 0.03% | 16.7% | 18.9% | 0.03% | 0.05% | 1.60% | 0.05% | 0.02% |\\n| **GLPSO** | **sDMS-PSO** | **DTPSO** | **SEP-CMA-ES** | **BIPOP-CMA-ES** | **MMES** | **Vanilla BO** | **LA-MCTS** |\\n| 2.55% | 1.05% | 4.72% | 0.05% | 14.6% | 0.02% | 0.03% | 0.02% |\\n| **SA** | **Dual Annealing** | **NSA** | **SLSQP** | **Trust-Constr** | **COBYLA** | **L-BFGS-B** | |\\n| 0.13% | 10.6% | 0.22% | 23.3% | 0.05% | 5.20% | 0.02% | |\\n\\nFrom the tables, we can observe that there is a data imbalance challenge (e.g., 20.4% is the Vanilla DE optimizer) considering instruction-tuning general LLMs for solving optimization problems in LLaMoCo, which motivates us to employ a balanced data sampling strategy (Section 3.2, lines 299-312) to avoid biased training. 
This further highlights the importance of LLaMoCo since it aims to generate the most effective optimizer code not only for the majority of problems that can be easily solved by popular optimizers such as GA, DE, CMA-ES and SLSQP, but also for uncommon problems which require specialized optimizers. Such optimization knowledge, as we discussed in Section 4.4, makes LLaMoCo not only more effective than general LLMs such as GPT-4 (which outputs DE or SLSQP for almost all tested problems), but also friendlier for users: when a user (with limited optimization knowledge) has an optimization problem to solve, LLaMoCo can automatically generate a specialized optimizer in an end-to-end style. In contrast, if this user prompts GPT-4, the answer is likely a DE optimizer, which might lead to poor performance on their optimization problem.\"}",
"{\"title\": \"Response to Reviewer #mqqR (part 2/2)\", \"comment\": \"**[Q3, out-of-distribution evaluation]**\\n\\nWe would like to clarify that the **out-of-distribution evaluation has already been conducted and is presented in Table 3**. The tested problems in this table come from diverse realistic scenarios with significantly different problem structures compared with the synthetic problems we used for training. The results demonstrate two key points:\\n\\n1. The LLMs fine-tuned by LLaMoCo exhibit superior generalization performance to other baselines on the out-of-distribution tasks. \\n2. The incremental performance trend among the three LLaMoCo models (S, M, L) in Table 3 is consistent with the trend in the in-distribution evaluation in Table 1.\"}",
"{\"title\": \"Response to Reviewer #eFRN (part 2/2)\", \"comment\": \"**[W2 & Q4, contrastive warm-up]**\\n\\nWe are honoured that the reviewer finds the contrastive warm-up component noteworthy enough to merit a standalone paper. However, we clarify that it is an integral part of LLaMoCo, tailored specifically for the optimization domain. In Section 3.2 (lines 262-269), we discuss two domain-specific challenges that necessitate the contrastive warm-up to facilitate the subsequent SFT. \\n\\n1. In the optimization domain, the number of optimization problems is far greater than the number of optimizers. Hence, multiple problems can share the same effective optimizer despite differing descriptions. Without a contrastive warm-up to align the hidden embeddings, the model struggles to generalize well across these cases. \\n2. Minor changes in the problem descriptions could significantly alter the optimization properties, resulting in different effective optimizers. The contrastive warm-up pulls the hidden embeddings of these similar but distinct problems apart, enhancing the model\\u2019s ability to distinguish them. \\n\\nWhile we admit the idea of contrastive warm-up could potentially be applied to other domains with similar alignment issues, this should not be a point of criticism of our paper. On the contrary, this generalizability reflects the novelty and broad impact of our proposal! Note: we have added the above discussion into the revised paper, Section 5, lines 533-538 (colored in blue), as a promising future direction for potential practitioners. \\n\\nBesides, we understand your concern that we only showcase the effectiveness of the contrastive warm-up of LLaMoCo-S on $\\\\mathbb{I}_{eval}$ in the ablation studies (Figure 3). We have expanded these to include performance gain curves for eight more scenarios (three models\\u2014LLaMoCo-S/M/L\\u2014and three test sets from Table 1). 
These new results, presented in Figure 6, Appendix E.5 of the revised paper (colored in blue), consistently demonstrate the learning enhancement through the contrastive warm-up. We kindly request the reviewer to examine these figures.\\n\\n**[Q1, leverage CoT & multi-turn code optimization conversations]**\\n\\nWe clarify that under the motivation and vision of LLaMoCo, the current dataset format seems to be optimal. To be specific, the core motivation of LLaMoCo is to provide an alternative way of leveraging LLMs for the optimization domain which could achieve superior optimization performance with minimal computational resources. To this end, we design the dataset format as the prompt-answer pair, which facilitates both end-to-end training and inference. Once trained, LLaMoCo generates the desired optimization code in a single conversation turn, directly supporting this motivation.\\n\\nWhile we acknowledge that constructing a CoT-based training dataset could potentially enhance LLaMoCo's generalization through reasoning dynamics, this approach diverges from our primary goals and requires significantly more human effort and cost: a) it would significantly increase token consumption during inference, and b) constructing such a dataset would require substantially more domain expertise, whereas our current dataset is fully automated, generated through benchmarking and grid search. We hope this explanation clarifies our design choice. Besides, while we acknowledge that leveraging multi-turn code optimization might potentially enhance the performance of the generated optimizers, it comes with notable drawbacks. First, it requires users to provide additional feedback and guidance to the LLMs, which demands a certain level of expertise. 
Second, multi-turn code optimization consumes significantly more computational resources (e.g., token usage), making it far less efficient and economical compared to LLaMoCo's single-turn approach.\\n\\nWe thank the reviewer for such valuable comments. We have added the core part of the above discussion into the revised paper, Section 5, lines 533-538 (colored in blue), as promising directions for further development.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Request for further feedback\", \"comment\": \"Dear reviewer #eFRN, since the discussion period has been extended, we respectfully request your further feedback on our response to your newly posted comments. Is there specific experimental analysis we should conduct to further address your concerns? We are open to any suggestions from you. Thanks for your precious time!\"}"
]
} |
EK1yOLL7GA | Tokens on Demand: Token Condensation as Training-free Test-time Adaptation | [
"Zixin Wang",
"Dong Gong",
"Sen Wang",
"Zi Huang",
"Yadan Luo"
] | In this work, we introduce Token Condensation as Adaptation (TCA), a training-free approach designed to mitigate distribution shifts encountered by vision-language models (VLMs) during test-time inference. TCA bridges distribution gaps at the patch level by condensing image tokens that exhibit low attentiveness to the <cls> token. Recognizing the <cls> token may correspond to universal concepts, TCA identifies and tracks the most reliable <cls> tokens that align specifically with target classes from historical data streams. To achieve this, we propose a context token reservoir (CTR), which retains tokens with the lowest uncertainty as ``anchors" to guide the preservation of class-relevant tokens during inference. These anchors, in turn, act as token-level classifiers to correct VLM predictions and improve visual-text alignment. Utilizing anchors sampled from CTR, TCA condenses tokens through two operations: (1) pruning class-irrelevant tokens that consistently rank low across all attention heads to reach cross-head consensus on their irrelevance, and (2) merging the remaining class-ambiguous tokens into representative centers using coreset selection, maintaining linear computational complexity. As the first method to explore token efficiency in test-time adaptation, TCA consistently demonstrates superior performance across cross-dataset and out-of-distribution adaptation tasks, reducing GFLOPs by 12.2\% to 48.9\% while achieving accuracy improvements up to 21.4\% against the strongest baseline without introducing additional parameters. | [
"test-time adaptation"
] | https://openreview.net/pdf?id=EK1yOLL7GA | https://openreview.net/forum?id=EK1yOLL7GA | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"sUDjiWR2RV",
"btEvwSSb91",
"aa1fhh8o1I",
"CCqPbIufB6",
"3biARekY37"
],
"note_type": [
"official_review",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1730347282969,
1731459408641,
1730537549409,
1729510133970,
1730658457851
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2307/Reviewer_8qmQ"
],
[
"ICLR.cc/2025/Conference/Submission2307/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2307/Reviewer_vH1N"
],
[
"ICLR.cc/2025/Conference/Submission2307/Reviewer_cmcN"
],
[
"ICLR.cc/2025/Conference/Submission2307/Reviewer_stTV"
]
],
"structured_content_str": [
"{\"summary\": \"The authors proposed Token Condensation as Adaptation, a token pruning and merging method that operates at test-time on the ViT blocks in VLMs. Token condensation was previously framed as a way to reduce computational burden for vision transformers by either removing unimportant tokens or merging similar tokens. The authors, on the other hand, proposed to use token condensation as a measure to address multimodal distribution shifts by attenuating the effect of irrelevant and ambiguous tokens.\\nThe authors showed through experiments that their method both reduces GFLOPs and improves performance. The experiments include results on a wide range of datasets. The baselines are rather exhaustive. Results are somewhat promising.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The idea of using token condensation for improving distribution shift is novel and encouraging. The authors provided good intuition for how their method works. The visualizations of the three types of tokens are generally convincing.\\n2. Experiments cover a wide range of datasets and settings, although the strength of the method was not reflected in all of them.\", \"weaknesses\": \"1. Most of the discussions are centered around CLIP. The reviewer believes the paper would be more impactful if the authors could share thoughts on extending their method to other settings, such as integration with recent open-source large VLMs. Despite the novelty presented in the paper, test-time adaptation on CLIP is already a rather extensively-studied area. If CLIP is the only experimental setting, it would be questionable if TCA could bring significant impact to the VLM research community.\\n 1. For the reason above, the paper\\u2019s abstract and introduction sections contain rather too ambitious writing (compared to the conclusion section where the authors downgraded their method to yet another \\u201ctest-time adaptation method for CLIP models\\u201d.) 
These sections need a major revision. \\n2. Experimental results are weak in the OOD setting. The authors provided a brief explanation for this in the appendix; however, I believe this would not be sufficient because the proposed method was supposed to counter distribution shift. The paper needs to address the problem of how their method, possibly while working together with another method, could bring significant improvement in the OOD setting. If we think about an industry user or future researcher of CLIP, is such performance improvement sufficient evidence to justify using the TCA method? Considering that TCA requires a major change of the code and may not necessarily accelerate computation.\\n3. Results in table 1 show high unevenness across datasets. Any possible causes? \\n\\nThe reviewer is somewhat satisfied with the results already presented in the paper, but has concerns on its application and impact. I look forward to the authors\\u2019 response.\", \"questions\": \"I would like to learn more about the implications of those GFLOPs reductions from the proposed method. For example, how does it affect the latency of the model? How does the Context Token Reservoir affect the latency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"The author introduces a new method named Token Condensation as Adaptation (TCA) to address distribution shift issues faced by Visual Language Models (VLMs) during test-time reasoning. TCA tackles the visual distribution shift problem by compressing image tokens that receive minimal attention from the <cls> token. The author proposes a contextual token reserve to identify and track the most reliable <cls> tokens from historical data streams aligned with the target category. CTR retains tokens with the lowest uncertainty guiding the retention of category-related tokens during reasoning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well written and easy to follow.\", \"weaknesses\": \"1. The novelty is limited. Using the [cls] token to select tokens is not a novel approach, as it has already been validated in many vision methods[1]. Therefore, I cannot recognize it as an innovation.\\n\\n2. In Table 3, TCA surpasses ToMe by only a very small margin. This does not sufficiently highlight the advantages of TCA. \\n\\n3. The scope of tasks is too limited. The author only validates TCA on classification tasks, while VLMs encompass many other tasks. Rather than being a broad enhancement for VLMs, the method seems more like an improvement specific to CLIP's vision encoder. Could the author validate the effectiveness of the proposed method on a broader range of tasks?\\n\\n[1]EViT: Expediting Vision Transformers via Token Reorganizations, ICLR2022\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This post introduces a new method called Token Condensation as Adaptation (TCA) that aims to address the distribution shift problem encountered by Visual Language Models (VLMs) during test-time reasoning. TCA addresses the problem of visual distribution shift by compressing image tokens that have little attention from the <cls> token. TCA proposes a contextual token reserve (CTR) to identify and track the most reliable <cls> tokens in the historical data stream that are aligned with the target category. CTR retains tokens with the lowest uncertainty as \\\"anchors\\\" to guide the retention of category-related tokens during reasoning. At the same time, these anchors in turn act as token-level classifiers to correct the prediction results of the VLM.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method can reduce the computation overhead of ViT.\", \"weaknesses\": \"1. The structure of the article needs further optimization.\\n2. Some concepts lack necessary explanation.\", \"questions\": \"1. The structure of the article needs further optimization:\\n\\n a) In lines 71-72, there is a lack of explanation for the experiment in Fig. 2, and when reading this part, it is difficult to understand how the authors reached their conclusions. From Fig. 2(a), we can only see that different tokens have different responses to the <cls> token. Similarly, without explaining clearly how the anchor token is obtained and used, the author explains the role of the anchor in lines 83-84. These make reading difficult.\\n\\n b) The summary of the method proposed in the paper in the introduction is too abstract, making it difficult for readers to grasp the actual method proposed by the author.\\n\\n c) The best results in Tab. 1 and 2 should be highlighted.\\n\\n2. The concept of visual shift needs further clarification. What is the connection between the proposed method and domain shift? 
Why can ``visual shift'' be reduced by removing task-irrelevant features? In my opinion, the method proposed by the author only eliminates interference by removing task-irrelevant features.\\n\\n3. The method proposed by the authors is related to the order of the input, and related ablation experiments are required, such as the impact of the shuffle test set on performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents Token Condensation as Adaptation (TCA), a training-free method designed to handle distribution shifts in vision-language models (VLMs) during test-time inference. TCA condenses image tokens by focusing on those with significant attentiveness to the <cls> token. The method uses a context token reservoir (CTR) to store tokens with low uncertainty as anchors, which help retain class-relevant information and guide inference. The results show that TCA enhances cross-dataset and out-of-distribution adaptation performance, reducing computational requirements.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The flow of the argument is solid and novel.\", \"Specifically, the finding that not all visual patches are useful for visual representation and taking this motivation to exclude these from the test-time adaptation makes sense and is interesting.\", \"Also, finding a better <cls> token that only attends to its target class instead of other concepts is an exciting idea grounded in logical reasoning.\", \"The biggest hurdle of previous Test-time adaptation methods for CLIP is their heavy computational cost (augmentation and backprop time). Although the proposed method does not necessarily beat the performance of previous Test-time methods for CLIP, the proposed method is augmentation-free and thus significantly reduces the computational cost needed, which is valuable for the test-time use case.\", \"The paper is formatted well with good figures.\"], \"weaknesses\": [\"In Figure 2-(a) x-axis, state the ordering of the sorting (e.g., highest to lowest)\", \"State what the different colors mean in the figure with legends or in the figure caption.\", \"In the Ablation study, the authors showed the impact of the different $\\\\beta$ and $\\\\lambda$ terms on their performance. What values were used for the main experiments in Table 1 and Table 2? 
Also, since these are hyperparameters, wouldn't the authors' claim in line 375 that the proposed method is hyperparameter-free be wrong?\", \"The conclusions drawn in this work are empirical rather than theoretical. While basing a method on empirical observations is valid, the absence of theories somewhat decreases the overall contribution of the research.\"], \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EJgxMsiAO9 | Alice in Wonderland: Simple Tasks Reveal Severe Generalization and Basic Reasoning Deficits in State-Of-the-Art Large Language Models | [
"Marianna Nezhurina",
"Lucia Cipolina-Kun",
"Mehdi Cherti",
"Jenia Jitsev"
] | Large Language Models (LLMs) are often described as being instances of foundation models - that is, models that possess strong generalization and therefore transfer robustly across various tasks and conditions in a few-shot or zero-shot manner, while exhibiting scaling laws that predict generalization improvement when increasing the pre-training scale. These claims of strong generalization and advanced reasoning function enabling it rely on measurements by various standardized benchmarks where state-of-the-art (SOTA) models score high. We demonstrate here a dramatic breakdown of generalization and basic reasoning of all SOTA models which claim strong function, including advanced models like GPT-4 or Claude 3 Opus trained at the largest scales, using a simple, short common sense problem formulated in concise natural language, easily solvable by humans (AIW problem). The breakdown is dramatic as it manifests in both low average performance and strong performance fluctuations on natural problem variations that change neither problem structure nor its difficulty, while also often expressing strong overconfidence in the wrong solutions, backed up by plausible sounding explanation-like confabulations. Various standard interventions in an attempt to get the right solution, like chain-of-thought prompting, or urging the models to reconsider the wrong solutions again by multi-step re-evaluation, fail. We take these observations to the scientific and technological community to stimulate re-assessment of the capabilities of the current generation of LLMs as claimed by standardized benchmarks. Such re-assessment also requires common action to create standardized benchmarks that would allow proper detection of such deficits in generalization and reasoning that obviously remain undiscovered by current state-of-the-art evaluation procedures, where SOTA LLMs obtain high scores. 
Code for reproducing experiments in the paper and raw experiments data can be found at https://anonymous.4open.science/r/AITW_anonymous-69A6/ | [
"large language models",
"foundation models",
"generalization",
"reasoning",
"function testing",
"evaluation",
"benchmarks",
"robustness",
"function breakdown"
] | Reject | https://openreview.net/pdf?id=EJgxMsiAO9 | https://openreview.net/forum?id=EJgxMsiAO9 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zFFO9teFBZ",
"zE1k7Q26GO",
"xcJVqPTU0h",
"x0vWIjfqQl",
"tKfvunFRe5",
"pRsY5svRFD",
"o2sW0nySLo",
"nh09UcG8fW",
"lGYX59gHFa",
"koglmve69N",
"g2STmJZOf9",
"dJqGzMj22R",
"ZKQuUPqYFF",
"TeD5XQcNM7",
"SFPh0ErEGw",
"RWsnxyCozS",
"RHu2HDr009",
"RB2NS19TxT",
"NICQ7bqM4K",
"MZ3Ia1tDAs",
"LqlViak30K",
"K0oej2WPDh",
"JjfYJG4uVR",
"JTbV0s11vf",
"HvJTSvlzc4",
"GsySr9W3BL",
"ETuVzzjKX8",
"DVsMwKMOx9",
"DAj8c85JNT",
"D2OUvdPcYB",
"6FTixqtSMm",
"5QzASEQW3P",
"53B6fbBe0b",
"4tvLsE978e",
"4QTiI2LUuj",
"4718F2ciH5",
"3wXnb5zQF7",
"1qyS0r02Oo",
"143c24nEmI",
"0s2omYQByi"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730566528355,
1732250327666,
1733229613647,
1732606957978,
1732195691788,
1730537910771,
1732066530787,
1732638057110,
1732195805992,
1732752130022,
1731883471062,
1731853441650,
1732607471485,
1732803081925,
1732630732907,
1732546777869,
1732607368358,
1733095706469,
1737523894736,
1732660193023,
1730677692186,
1735026022836,
1730658811241,
1731837699941,
1732546293326,
1732250711286,
1732666146851,
1732799536805,
1732026627144,
1732802976953,
1732195953418,
1732272500120,
1732545963862,
1732980060292,
1730163009925,
1732546936527,
1732066758548,
1732195421913,
1732666683128,
1732026970447
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_NZax"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_5S6M"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_3mYn"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_5S6M"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_7c6D"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_3mYn"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_hx2d"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_3mYn"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_7c6D"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_7c6D"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_7c6D"
],
[
"ICLR.cc/2025/Conference/Submission8214/Area_Chair_Np52"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_3mYn"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_NZax"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Reviewer_hx2d"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8214/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper tests SOTA LLMs\\u2019 reasoning abilities by testing them with a bunch of variants of the AIW problem.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The tests are across different SOTA LLMs.\", \"weaknesses\": \"1. The whole paper is about one type of question: \\u201cAlice has N brothers and she also has M sisters. How many sisters does Alice\\u2019s brother have?\\u201d. I personally feel like it is hard to judge a model\\u2019s capabilities based on one type of question alone. A model\\u2019s generalization and reasoning abilities are maybe on a spectrum and with only one question, it is hard to tell where the model falls on this spectrum.\\n\\n2. GPT-4o has superior performance, possibly suggesting this might have to do with model size or training? \\n\\n3. I don\\u2019t consider \\u201cfemale boost\\u201d as totally redundant information. For one thing, if you are testing the model\\u2019s reasoning abilities, it should disentangle the model\\u2019s syntactic understanding as a separate thing. \\u201cShe\\u201d as a sole indicator of Alice being a female is more a syntactic problem, which shouldn\\u2019t be part of the model\\u2019s burden if one\\u2019s goal is simply to test reasoning abilities. \\n\\n4. Personally, I feel like this paper adds nothing significantly interesting to the existing discussion on whether LLMs can reason or generalize. For one thing, a pure test on reasoning should not rely much on extra knowledge. The test in the paper (AIW) needs the model\\u2019s understanding of \\u201cshe\\u201d as Alice (syntactic) and basic family structure (external knowledge). The actual reasoning, on the other hand, is in my opinion, perhaps not the main bottleneck. This is also supported in the paper where \\u201cfemale boost\\u201d variants can improve performance.\", \"questions\": \"1. 
For figure 1, what do the numbers like 55, 56, 63, 69 mean?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your detailed response. I will maintain my score.\"}",
"{\"comment\": \"Dear reviewers,\\n\\nwe would like to thank you all again for time and involvement in the discussions.\\n\\nAs the discussion time nears its end, we would like to encourage you to go through discussions, and should those be insightful and resolve some of the concerns, to consider reflecting this in the scores.\\n\\nWe would like to highlight again additional experiments performed in the rebuttal period upon reviewers' request:\\n\\n- Experiments with modified problem templates that differ from AIW Original. For instance, we either introduce Alice and Bob as two entities in the problem structure to deal with, or we replace brothers and sisters entities with male/female friends, abandoning family specific frame. Using same experimental procedure to create variations of these problem versions, we observe the same pattern as for the AIW original, especially the strong fluctuations across variations, confirming the existence of the same generalization deficits for further problem examples (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png)\\n\\n- An experiment concerning an AIW version where numbers for brothers and sisters are instantiated to be in an exaggerated range not realistic for a typical family scenario. In this AIW version, offset 60 is added to numbers in AIW original problem. We observe the same pattern as in AIW original (Fig. H https://hackmd.io/_uploads/H1sLdj2Mye.png) - low correct response rates and strong fluctuations across variations. We also see slightly lower correct response rates on average compared to AIW original. 
This might point to further generalization deficits becoming apparent when dealing with numbers outside the expected problem specific range, despite problem structure left unchanged.\\n\\n- An illustrative example of a debunking procedure using the recent case of NuminaMath-7B that was ranked 1st at the recent AIMO competition, solving 29/50 private set problems of olympiad math level (https://huggingface.co/AI-MO/NuminaMath-7B-TIR). Based on that evals, the claim was widely put forward that the model is capable of solving high school olympiad math problems (https://www.aimodels.fyi/models/huggingFace/numinamath-7b-tir-ai-mo). AIW problem has average elementary school level and does not require any advanced math knowledge. We tested NuminaMath-7B on AIW and observed a strong collapse of this model on AIW problem, with correct response rates close to 0 across AIW variations 1-4. Using AIW Light control problems, we can also see that NuminaMath-7B can handle all the low level operations and knowledge required to deal with family structure, ruling out that those are the issues. Using the AIW problem setting, we thus can contradict the strong claim of being capable to robustly deal with math problems (Fig F, https://hackmd.io/_uploads/SybG2hqz1x.png). 
Especially, breakdown in such a simple setting rules out that model will be able to deal robustly with olympiad level math tasks, debunking the claim and raising further questions about AIMO benchmark procedure used to measure model performance.\\n\\nFor the collected data for this debunking experiment, see anonymous repo https://anonymous.4open.science/r/AITW_anonymous-69A6/collected_responses/raw_data_inspection/NuminaMath-7B_AIW_versions_problem_set_checked.json\\n\\nWe hope this together amounts to convincing evidence that experimental setup we describe in our work is useful for community as reproducible measurement method for systematically probing models' generalization using structure preserving variations of simple problem setting. The method is also useful for falsification of the strong function hypothesis, to debunk overblown claims that rely on benchmarks which overlook such clear deficits as revealed by the AIW problem and its variations. We also hope that study provides impulses and clear roadmap to create novel benchmarks that use problem structure \\\\& difficulty preserving perturbations to stress test model generalization systematically, so that such deficits do not remain hidden, also allowing for measurable progress in improving model generalization capability.\"}",
"{\"title\": \"Response to comment 1\", \"comment\": \"I only agree partially with the second point mentioned above. The analogy with adversarial robustness is not entirely correct, I agree, but the notion of adversarial trigger is not necessarily connected with a variation of existing input points. See [1], for example.\\nAs regards the first point, adding experiments with illustrations can cause the model to find a 'shortcut': nonetheless, it would have been an interesting observation to make. But I understand your motivation.\\n[1] Universal Adversarial Triggers for Attacking and Analyzing NLP\"}",
"{\"title\": \"Rebuttal Continuation, 2\", \"comment\": \"> The paper lacks a consistent analysis of other examples, and, in this form, it reduces the contribution to an exciting yet anecdotal showcase of failure.\\n\\nWe respectfully disagree with notion that failures that we study are anecdotal. Contrary to various indeed anecdotal examples, we conduct systematic evaluation of correct response rates of various SOTA models across variations while controlling variation source (Fig 1,2,6). We also execute control experiments to rule out various low level issues as source of observed failures and fluctuations (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png; Fig 3,4,5). The failures and exact full statistics of model behavior can be reproduced following our approach of using fixed problem template while introducing problem compatible variations, which lead to strong fluctuations. As our main goal was to falsify the still widespread hypothesis stating current SOTA LLMs possess robust generalization and reasoning, which is usually backed up by current standardized benchmarks (for bench failures, see Fig. 7, Suppl. Fig. 16,17,18,19) , we searched for a minimal sufficient problem setting that will allow such falsification. AIW problem that is made so simple that arguably elementary school children can easily handle it (Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png, Suppl. Tab. 2), allowed us to obtain such evidence falsifying the strong function claim, as it would be expected from models with even basic generalization and reasoning capabilities to solve AIW without strong fluctuations with correct response rates close to 100% across all variations. Given this clear evidence, no further other examples were necessary.\\n\\n> Plenty of articles show failure cases of LLMs on many examples and variations; one very popular is [1]. 
\\n\\nIn line with preceding response, to our knowledge, ours is the first study that systematically quantifies severe lack of model robustness pointing to generalization deficit in a simple, well reproducible scenario, gathering statistics executing experiments across many repeated trials (at least 30 per each variation and prompt type combination, Suppl. Fig. 20), controlling for conditions and source of variations, obtaining evidence consistently across many SOTA LLMs including the most advanced large scale ones, and using control experiments to rule out various low level causes like natural language/numbers parsing, tokenization, problem specific ambiguities, failures to access specific knowledge (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png; Fig 3,4,5). \\n\\nWe would also like to emphasize that variations we use are not rooted in prompt engineering or other arbitrary problem modifications that put model into vastly different operation modes - we make use of variations in a simple problem template that are natural part of problem instantiation (Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png, Suppl. Tab. 2) and do not change problem structure, its difficulty or aim to heavily alter processing mode in general (like done in prompt engineering). Our approach makes it thus possible to draw from collected evidence about model sensitivity conclusions about generalization deficits, contrary to various previous anecdotal examples where such systematic measurements were not performed, one or only few models were used, control experiments were not executed, problem setting was ambiguous or prone to low level issues like tokenization, findings were hard to reproduce and thus no clear conclusions about conditions, extent and nature of model failures were possible. 
\\n\\nWe think the work done in [1] (https://arxiv.org/abs/2302.08399) is exactly of this \\\"anecdotal\\\" kind - it takes single model (GPT-3.5), goes through selected cases of failure without executing many trials under well controlled conditions to gather statistics on average failure rates related to various conditions, so that it is indeed impossible to conclude whether failures were due to specific problem type, input presentation, prompt type etc, or indeed due to deficits in core model function, such that those examples remain in anecdotal realm. One of motivations behind our work was to depart from anecdotal failure reports and do systematic evaluation study showing how such failures manifest in reproducible manner (all data available in the repo https://anonymous.4open.science/r/AITW_anonymous-69A6/README.md) and whether they indeed relate to core generalization deficits in al SOTA LLMs, which we hope to have succeeded in.\"}",
"{\"summary\": \"This paper demonstrates a dramatic breakdown of generalization and basic reasoning of all SOTA models (including advanced models like GPT-4 or Claude 3 Opus) which claim strong function, using a simple, short, conventional common sense problem formulated in concise natural language (AIW problem). The authors observe that large language models (LLMs) exhibit significant performance fluctuations on simple problems across minor variations that should not impact problem-solving ability at all. Additionally, various standard interventions, such as chain-of-thought prompting, failed to yield correct solutions in the AIW problem. These observations highlight the need to re-evaluate the claimed capabilities of the current generation of LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written and presents clear ideas.\", \"The authors conduct extensive experiments on over 36 LLMs to demonstrate the breakdown of SOTA LLMs on the simple AIW problem.\", \"Fully open-sourced code and data to reproduce the result.\"], \"weaknesses\": [\"The problem setting of AIW has certain interfering factors. After some attempts, I found that the main reason LLMs perform poorly on AIW origin is due to easy thinking. For example, \\\"Alice has 3 brothers, and each of these brothers has the same sisters, who are Alice's sisters. Alice has 6 sisters, so each of her brothers has 6 sisters.\\\" The issue here is that the LLM overlooks counting Alice herself, rather than lacking reasoning ability. I believe that testing similar problems in mathematical reasoning tasks (like GSM8K or MATH) would be more convincing.\", \"Quite a few typo errors. line 016, few-show -> few-shot; format of most of the citations is wrong.\"], \"questions\": \"See weaknesses.\\n\\nRecently, I came across another paper that is similar in content to this study. 
I know this paper was published online prior to [1], I'm just curious about what advantages the authors believe the AIW dataset presented in this paper has compared to [1], which focuses on the mathematical reasoning task.\\n\\n[1] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We thank the reviewer for taking the time to deal with our work. We appreciate feedback on its positive aspects emphasizing interesting finding of model breakdown, extensive experiments and analysis involving a large set of tested models. We also appreciate further points raised, which we address in following:\\n\\n> The study while interesting and definitely highlights a problem with modern LLMs is quite limited by the actual test set (basically being based on a single question and variations thereof).\\n\\nMain goal of our paper is to convincingly falsify the hypothesis stating current SOTA LLMs possess robust generalization and reasoning, which is backed up by current standardized benchmarks. Following the scientific method to obtain such falsification, we search for a \\u201cminimal\\u201d experiment/problem, as simple as possible to provide sufficient evidence for hypothesis falsification. AIW problem, despite being a specific problem, together with the developed technique to measure models\\u2019 sensitivity to variations introduced into the problem template without changing problem structure or difficulty, satisfies this requirement (Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png, Suppl. Tab. 2). For us, it is surprising to observe i) low correct response rates (Fig. 1, Suppl. Fig. 8) and even more importantly ii) strong performance fluctuations (Fig 2, 6) across AIW variations that do nothing but only change numbers N, M in such a simple problem setting. Models claiming robust zero-shot generalization and basic reasoning should have been able to solve AIW variations without fluctuations with correct response rate close to 100%. 
Together with control experiments that rule out failures in low-level issues like natural language/numbers parsing or handling specific knowledge about family structure, the evidence we obtain by using AIW problem setting is thus sufficient to disprove strong function hypothesis and also reveal generic generalization deficits beyond specific AIW problem scenario that are not detected by standardized benchmarks (Fig 7, Suppl. Fig. 16,17,18,19). \\n\\n> Finding specific phrasings that break a model is very common to most problems and to most people that have done prompt engineering. \\u2026 single examples that work poorly and then others that work well \\u2026 very common for most tasks\\n\\nIn our work, we do not deal with finding specific phrasings or formulations that break models. On the contrary, we intentionally design AIW problem formulation to be simple, short and unambiguous to focus on the effect of variations built into the template, which are as well constructed to be simple, natural variations (Fig. A, Suppl. Tab. 2). We think it is the most intriguing part of our work that strong fluctuations observed in all tested SOTA LLM are not caused by prompt engineering (which is indeed well known), but appear across problem variations that change neither problem structure nor its difficulty, observed behavior being consistent across various prompt types (Fig. B https://hackmd.io/_uploads/rJPFH38Gyl.png; paper Fig 2,6). The examples that work poorly and examples that work well differ in nothing else than instantiated numbers in the template, which allows conclusion about severe lack of robustness and generic generalization deficits. To our knowledge, ours is the first study that systematically quantifies such lack of robustness in a simple scenario, seeing all SOTA LLMs exhibiting this phenomena and using control experiments to rule out other causes. 
This approach makes it possible to draw conclusions about generalization deficits, contrary to various previous anecdotal examples where such systematic measurements were not performed, findings were hardly reproducible and no clear conclusions about extent and nature of model failures were possible. \\n \\n> \\u2026 analysis should be much more detailed in terms of how and why models perform poorly or well on these tasks \\u2026 \\\"Physics of Language Models\\\" [line of work] provides excellent analysis of how model perform and why they generalise poorly.\\n\\nWhile we agree that looking into various types of model breakdowns and their causes is very fruitful in general, we would like to note that science is an iterative process. We hope to have made with our work one important iteration, discovering a minimalistic AIW problem and a technique to measure performance fluctuations across variations in the problem template that do not change problem structure or difficulty that allowed us to obtain clear evidence for severe generalization deficits in all SOTA LLMs (https://hackmd.io/_uploads/rJPFH38Gyl.png; paper Fig 2,6), contrary to previous claims. Follow up work can be to take this newly obtained evidence and conduct systematic studies elucidating origins of the fluctuations (where it helps that our control experiments could rule out low level issues; Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png; Fig. 3,4,5), which in turn may give hints what is required to get robust generalization.\"}",
"{\"comment\": \"We thank the reviewer for taking the time and involving intensively into discussion. We appreciate raising the score following our initial rebuttal. In following, we would like to iterate on points raised:\\n\\n> I still hold the opinion, shared by most reviewers in this batch, that one example is not sufficient to say much about a model's capabilities \\u2026 my biggest concerns still hold (one example is not sufficient to show any lack of reasoning \\u2026)\\n\\nWe think that there is still ongoing misunderstanding among reviewers when pointing out we deal with a single example only of what the study\\u2019s main goal is. Main goal of the study is to provide convincing falsification of the hypothesis that posits strong zero-shot generalization and robust basic reasoning exhibited by current SOTA LLMs (let\\u2019s call it \\u201cstrong function hypothesis\\u201d). To falsify the hypothesis, following scientific method we construct a minimal experimental setup - as simple as possible to perform this duty - that gives us sufficient evidence to reject the strong function hypothesis. \\n\\nThus, we do not aim to test all the various abilities of the models. We aim to test whether the claim of strong function holds, to falsify which it is sufficient to present one clear contradiction backed up by proper systematic measurements. AIW problem and measurement procedure using its variations (see Fig. K for overview https://hackmd.io/_uploads/rkPA9H-myg.png) have two crucial properties for providing this contradiction and thus falsification:\\n\\n1. It is a very simple, unambiguous, common sense math problem, arguably simple enough to be solved by elementary school children. Any system that claims to have robust problem solving capabilities - and current advanced SOTA LLMs (e.g GPT-4, Claude 3 Opus) maintain quite a strong claim to be able to deal with math problems of graduate student level ! 
- should be able to solve such a simple problem with correct response rates close to 100% if measured across many trials.\\n\\n2. It uses natural variations around fixed simple problem template to systematically probe models\\u2019 sensitivity to problem perturbations that keep problem structure and its difficulty unchanged. Controlling for source of variations in this way, it is possible to make conclusions about models\\u2019 generalization from the observed robustness to those variations in a simple scenario. Again, any system capable of robust generalization should be able to handle the problem independent of such \\u201cnatural\\u201d variations, showing strong performance on each variation as the problem structure is unchanged and so problem solving should be not affected. We would like to emphasize again that variations belong naturally to the problem, being mere number instantiations in the problem template (Fig. K https://hackmd.io/_uploads/rkPA9H-myg.png). Variations are not adversarial variations tweaked specifically to break the models, or prompt engineering like variations to tune the model performance. On the contrary, variations should keep the problem and model operation mode unchanged.\", \"we_obtained_following_evidence\": \"1. Overall low average correct response rates, averaged across many trials (> 30 trials for each combination of AIW variations 1-4 and 3 various prompt types, STANDARD, THINKING, RESTRICTED) (Fig. K; paper Fig. 1)\\n\\n2. Strong fluctuations across variations (Fig. K; paper Fig. 2,6) - despite each variation being instance of the very same problem, with the only difference between variations being the instantiated numbers, performance can vary wildly from high to low, also for most advanced tested models like GPT-4 and Claude 3 Opus. 
\\n\\nWe thus think this clear evidence from experiments using single problem type is indeed sufficient to debunk strong function claim and to point to generic lack of reasoning - reasoning about problem structure that would allow robust problem structure inference and would enable robust generalization despite problem irrelevant variations. As pointed out in previous discussions, it is otherwise impossible to explain strong performance fluctuations observed across variations - assuming any operation or set of operations necessary to solve AIW being broken, performance should have been affected equally across variations, as those pose the very same problem. Via control experiments, we also provide additional evidence to rule out that the observed breakdown is due to failures in executing low level operations specific to AIW (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png)\"}",
"{\"title\": \"Rebuttal Continuation, 3\", \"comment\": \"> The authors do not give a reasonable rationale behind why LLMs fail on the AIW problem.\\n\\n\\nOur work shows that all SOTA LLMs suffer from a lack of robustness pointing to severe generalization deficits. Why generalization deficits exist in current SOTA LLMs is a tough question to answer; we hope to have made one step in the direction of clarifying it by developing a method and, with the AIW problem, a tool that actually allows measuring the existence of such deficits, contrary to existing standardized benchmarks (Fig 7; Suppl Fig 16,17,18,19). Our control experiments show further how to rule out other low-level issues as potential causes of observed breakdowns (Fig E https://hackmd.io/_uploads/ByCpjM9M1x.png; Fig 3,4,5), which helps to narrow the search in future work.\\n\\n> Interestingly, it points out that LLMs fail on the AIW Light Arithmetic Total Girls but that is another anecdotal showcase of failure that does not tell us much about why LLMs fail on such simple problems.\\n\\nThere seems to be a misunderstanding as to what the AIW Light control experiments are revealing. LLMs do not fail, but successfully solve AIW Light control problems across variations, achieving high correct response rates across variations without strong performance fluctuations, in contrast to AIW original (Fig E https://hackmd.io/_uploads/ByCpjM9M1x.png). As AIW Light problems are constructed to test operations that are also required to solve AIW original, by leaving the problem template unmodified and altering the question for checking specific operations, successful handling of AIW Light proves that tested models are able to handle well all \\\"low-level\\\" operations and the specific knowledge required for AIW, like parsing natural language, numbers, tokenization in general, grasping basic family relations, binding attributes to entities (e.g. the female attribute bound to Alice via the pronoun \\\"she\\\") or performing arithmetic operations. 
Importantly, fluctuations on AIW Light variations disappear almost completely (Fig 3,4,5). Specifically, success in solving AIW Light Arithmetic Total Girls (Fig. 5) proves that models do not have any issue with binding the female attribute to Alice via \\u201cshe\\u201d, handling family structure and selecting and executing the correct arithmetic operation to count the total number of girls. Thus, contrary to anecdotal examples, these experiments allow ruling out low-level trivial issues behind the strong fluctuations and performance breakdown observed on AIW (Fig. 2,6), providing further evidence in favor of the hypothesis stating generic generalization deficits behind the fluctuations.\\n\\n> \\u2026 a sufficient (but not necessary) condition to solve the problem is being able to perform 2-hops reasoning and counting (from one of the brothers to all the sisters). LLMs seem to lack such capability.\\n\\nIn a similar line to the preceding discussion, we think that 1) strong performance fluctuations observed on AIW variations (Fig B https://hackmd.io/_uploads/rJPFH38Gyl.png; Fig 2,6) effectively rule out that the breakdown is due to a particular same type of operation failure, as it would then manifest equally across variations where the only differences are the differently instantiated numbers N,M, which means that if \\u201c2-hops reasoning and counting\\u201d would not work, then it would not be possible to observe at the same time high performance on one variation and breakdown on another, the problem structure being the same - all variations would show an equal breakdown, which we do not observe. 2) AIW Light control experiments rule out issues with failure of specific operations (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png). For instance, models handle well the AIW Light Family problem (Sec 2.1, Sec. 3.1.1, Fig. 4) which also requires determining the entity of Alice\\u2019s sister and then obtaining the number of the brothers - a 2-hop operation. 
Again, the evidence suggests that fluctuations on AIW are not due to the failure of a specific operation required to handle the AIW problem, but rather to an inability to robustly infer the problem structure due to generic generalization deficits.\\n\\n> Question 1. What happens when N and M grow larger than 7, and why do they decide to set that as the upper bound on such variables?\\n\\nThe AIW problem was designed to be a simple common sense math problem where problem instantiations have realistic number values; thus N,M were chosen to correspond to values that are probable in a realistic family frame. We executed a control experiment with higher, exaggerated numbers (adding an offset of 60 to the numbers used in AIW original), without observing unexpected qualitative differences (Fig. H https://hackmd.io/_uploads/H1sLdj2Mye.png). We again see rather low correct response rates and strong fluctuations across variations, with slightly lower correct response rates, which might hint at further generalization deficits if numbers are outside of the expected problem-specific range.\"}",
"{\"comment\": \"Thank you for running additional experiments. We have adjusted the score once more to 6. We hope that a slightly updated version of the code / data will be released that will allow for \\\"procedural\\\" augmentation of the data for better validation and use in the future.\"}",
"{\"comment\": \"We thank the reviewer for going through our work. We appreciate the comments on the novelty of the evaluation, the value of the extensive experiments, the simple reproducibility, and the clarity of the writing.\", \"we_further_appreciate_various_points_made_by_the_reviewer_and_would_like_to_address_those_in_following\": \"> Even though authors prove that SOTA LLMs fail on AIW task, I don't think we can claim that they are not capable of robust reasoning. On the contrary, paper shows that LLMs are capable of some types reasoning (like arithmetic, or basic family relations), but fail on the others (logical reasoning).\\n\\nBased on evidence from our work, we do not claim that LLMs are entirely incapable of reasoning (see also Sec.5, p.10). As shown by the AIW Light control experiments (Fig. 3, 4, 5), where the problem structure is simpler than in AIW original, tested models are able to infer the problem structure and successfully select and execute operations across AIW Light variations - handling basic family relations, elementary arithmetic and set operations, binding female attributes to entities (Alice, sisters) - operations also required to solve AIW original. We do claim that all SOTA LLMs fail to generalize robustly and to perform robust reasoning. This is evident from the strong performance fluctuations exhibited by all SOTA LLMs (Fig. 1, 2, 6) on AIW original variations that merely change numbers in the otherwise same fixed AIW problem template (Fig. B https://hackmd.io/_uploads/rJPFH38Gyl.png). Although problem structure and difficulty stay the same across AIW variations 1-4, and the same operations are required for the AIW solution as for the control AIW Light problems, correct response rates vary strongly across variations. Even for the most advanced models like GPT-4, rates can jump wildly from 1 to 0 despite entirely unchanged problem structure and difficulty. 
This cannot be called robust reasoning, especially not given the simplicity of the AIW problem, which can arguably be easily handled by elementary school children. Nor, though, can it be called an entire absence of reasoning, as correct responses are still present across variations, albeit with strongly different frequency. We thus argue that what we observe is a generic deficit in generalization consistently present in all SOTA LLMs, leading to a lack of model robustness and to fluctuations that cannot be explained by failures of specific low-level operations alone (required eg for arithmetical reasoning). Such failures would manifest equally across AIW variations - merely changing numbers N, M does not affect the way of operation execution, and fluctuations would not happen. Models claiming robust generalization and reasoning should be expected to solve simple problems like AIW across all variations with a nearly 100% success rate and without strong fluctuations, which is not what we observe. An open question for future work is what breaks problem structure inference, and consequently the proper selection and composition of low-level operations (like elementary arithmetics or set operations) to be executed, already in such simple scenarios.\\n\\n> Why do you think Llama-3 performs so much worse then Llama-2 model? I wonder, if there might be any issues with prompts, as Llama-3 models require different special symbols in chat template than Llama-2.\\n\\nLlama-3 does not perform much worse than Llama-2 - they rather perform equally badly. It is misleading here to look only at average correct response rates (Fig. 1), where the average is over AIW variations 1-4 and all RESTRICTED, STANDARD and THINKING prompt types. There, it might seem that Llama 2 70B (p=0.3) outperforms Llama 3 70B (p=0.05) (see also Suppl. Tab. 7). However, a glance at the full distribution of correct response rates reveals both models are equally bad in handling the problem (Fig. 
D https://hackmd.io/_uploads/S1QjnkuMyx.png; paper Fig. 2). Eg Llama 2 70B shows extreme lack of robustness - significant performance is shown only on single AIW variation 3, all others being 0 or close to 0 (important to emphasize again that variations do not change problem structure and difficulty). Due to that single outlier, average correct rate is significant, but robust problem handling is obviously broken. Thus, Llama 2 has the same severe deficits as Llama 3. Llama 3 behavior cannot be explained by requiring special instruction templates, as also without templates it handles very well all the AIW Light control problems (Fig. 3,4,5, also fluctuations vanish) and shows competitive performance to Qwen 2 72B on female boost version (Fig. 6). Prompts are available at https://anonymous.4open.science/r/AITW_anonymous-69A6/prompts/prompts.json, collected data https://anonymous.4open.science/r/AITW_anonymous-69A6/collected_responses/raw_data_inspection/AIW_AIW_plus.json \\n\\n> formatting of the citations is wrong, punctuation issues\\n\\nThanks for helping to catch those. With regard to citations, using \\\\cite instead of \\\\citep caused havoc. We will correct all of that in the updated manuscript version.\"}",
"{\"title\": \"Rebuttal response 2\", \"comment\": \"> typos, format of most of the citations is wrong\\n\\nThanks for catching those. With regard to citations, seems using \\\\cite instead of \\\\citep caused havoc. We will correct all of that in the updated manuscript version.\"}",
"{\"title\": \"Response to this discussion\", \"comment\": \"While my biggest concerns still hold (one example is not sufficient to show any lack of reasoning, there is no consistent analysis of why these models fail, not even for open-source LLMs), I believe the authors replied to many of my concerns and I thus increase the score to 5.\"}",
"{\"title\": \"Follow-up\", \"comment\": \"We would like to thank the reviewer again for their time and involvement in the discussions.\\n\\nAs a follow-up, we would like to emphasize that in contrast to testing models on a set of various problems that do not have much in common and where the source of variations is not controlled (which is the usual case for standardized benchmarks, eg GSM8K, Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png), we confront the models with instances of the same simple problem, which allows us to see how sensitive models are to perturbations that keep problem structure and difficulty unchanged and thus should NOT affect the ability to cope with the problem IF generalization were intact. Performance fluctuations across variations that preserve problem structure and difficulty, observed in this experimental setup, can thus provide direct evidence for the degree of generalization breakdown, additionally backed up by control experiments to rule out low-level issues (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png). To gather robust statistics, we execute many trials (> 30) for each AIW variation, estimating correct response rates for each variation from those multiple trials (Fig. J, https://hackmd.io/_uploads/B1MRvATzye.png) and also checking whether behavior is consistent independent of prompting conditions using 3 different prompt types (Fig. B, https://hackmd.io/_uploads/rJPFH38Gyl.png). See Fig. K https://hackmd.io/_uploads/rkPA9H-myg.png for an overview.\\n\\nTo confirm that the same observations hold for other simple problems of a related kind, per the reviewers\\u2019 request we conducted additional experiments with AIW versions with modified problem templates that differ from AIW Original. For instance, we either introduce Alice and Bob as two entities in the problem structure to deal with, or we replace the brothers and sisters entities with male/female friends, abandoning the family-specific frame. 
Using the same experimental procedure to create variations of these problem versions, we observe the same pattern as for the AIW original, especially the strong fluctuations across variations, confirming the existence of the same generalization deficits for further problem examples (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png).\\n\\nWe hope this amounts to convincing evidence that the experimental setup we describe in our work is useful for the community as a systematically reproducible measurement method for falsification of the strong function hypothesis and for debunking overblown claims that rely on standardized benchmarks and overlook such clear deficits.\\n\\nWe also would like to provide another illustrative example of such a debunking procedure, using the recent case of NuminaMath-7B, which was ranked 1st at the recent AIMO competition, solving 29/50 private set problems of olympiad math level (https://huggingface.co/AI-MO/NuminaMath-7B-TIR). Based on those evals, the claim was widely put forward that the model is capable of solving high school olympiad math problems (https://www.aimodels.fyi/models/huggingFace/numinamath-7b-tir-ai-mo). The AIW problem has an average elementary school level and does not require any advanced math knowledge. We tested NuminaMath-7B on AIW and observed a strong collapse of this model on the AIW problem, with correct response rates close to 0 across AIW variations 1-4. Using AIW Light control problems, we can also see that NuminaMath-7B can handle all the low-level operations and knowledge required to deal with family structure, ruling out that those are the issues. Using the AIW problem setting, we can thus contradict the strong claim of being capable of robustly dealing with math problems (Fig F, https://hackmd.io/_uploads/SybG2hqz1x.png). 
In particular, a breakdown in such a simple setting rules out that the model will be able to deal robustly with olympiad-level math tasks, debunking the claim and raising further questions about the AIMO benchmark procedure used to measure model performance.\\n\\nFor the collected data for this debunking experiment, see the anonymous repo https://anonymous.4open.science/r/AITW_anonymous-69A6/collected_responses/raw_data_inspection/NuminaMath-7B_AIW_versions_problem_set_checked.json\"}",
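The measurement procedure described in this thread (a fixed problem template, number-only variations, many trials per variation, per-variation correct response rates, and fluctuations across variations as a robustness signal) can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' released code: `query_model`, the exact template wording, the trial count, and the `robustness_gap` score are all stand-ins.

```python
# Sketch of the AIW evaluation procedure: one fixed template, variations
# that only swap in different numbers N, M, repeated trials, and a
# per-variation correct response rate. All names are illustrative.

TEMPLATE = ("Alice has {n} brothers and she also has {m} sisters. "
            "How many sisters does Alice's brother have?")

# Example number instantiations playing the role of AIW variations 1-4.
VARIATIONS = [(3, 6), (2, 4), (4, 1), (1, 3)]

def correct_answer(n: int, m: int) -> int:
    # Each brother has Alice's M sisters plus Alice herself.
    return m + 1

def correct_response_rate(query_model, n, m, trials=30):
    """Fraction of trials in which the model's parsed answer is correct."""
    hits = 0
    for _ in range(trials):
        answer = query_model(TEMPLATE.format(n=n, m=m))  # assumed to return an int
        hits += int(answer == correct_answer(n, m))
    return hits / trials

def robustness_gap(rates):
    """Simple fluctuation measure: spread between best and worst variation."""
    return max(rates) - min(rates)
```

A robust model should keep `robustness_gap` near 0 across variations; the thread reports rates jumping between 1 and 0 (a gap close to 1) on structure-preserving variations even for frontier models.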
"{\"comment\": \"Thanks for your detailed response. I will maintain my score.\"}",
"{\"title\": \"Rebuttal summary, continuation 2\", \"comment\": \"# AIW as a tool to check for and warn against overblown claims of strong core function\\n\\nIn our work, using the simple AIW problem that requires only elementary set and arithmetic operations and can be easily solved by adults and arguably even children, we observe a striking breakdown of SOTA LLM performance when confronted with the AIW problem variations (Fig. B https://hackmd.io/_uploads/rJPFH38Gyl.png, Fig. K https://hackmd.io/_uploads/rkPA9H-myg.png; paper Fig. 1, 2, Suppl. Tab. 2, 7). The breakdown is manifested in (i) overall low correct response rates (Fig. K https://hackmd.io/_uploads/rkPA9H-myg.png; paper Fig. 1) and (ii) strong performance fluctuations across natural variations of the same problem that do not affect the problem structure or its difficulty, which reveals a strong lack of robustness and hints at fundamental issues with the generalization capability of the models (Fig. K; paper Fig. 2,6). The observed breakdown is in dramatic contrast with claims about strong core functions of SOTA LLMs as backed up by standardized benchmarks, revealing the benchmarks' failure to properly measure core functions (Fig. 7, Suppl. Fig. 16,17,18,19).\\n\\nRelying on those benchmarks, it is still a commonly held position to attribute to SOTA LLMs advanced functions like robust zero-shot reasoning (e.g. [1], as one example of many), and in general to put high expectations of strong core functionality on released SOTA LLMs. 
Such claims extend beyond basic research artifacts and become pervasive in applied industry, where SOTA LLMs are advertised as robust problem solvers for various real-world settings, explicitly emphasizing their value as robust reasoners, coders and math solvers, attesting \\\"key business-critical capabilities\\\" or suitability for \\\"real-world enterprise use cases\\\" (see announcements by Cohere on Command R-Plus [2], or by Mosaic on DBRX [3], as only a few representative examples out of many). These models suffer collapse on simple AIW problem variations, obtaining correct response rates close to 0 or exactly 0 across variations (Fig. 1, Suppl. Fig. 8, Suppl. Tab. 7), although our control experiments show that both models can handle all the operations necessary to solve the problem (Fig. 3,4,5).\\n\\nGiven the situation where standardized benchmarks fail to detect obvious failures of core function, the AIW problem together with the measurement procedure using its variations provides a way to debunk claims of strong function by testing models that report high scores on standardized benchmarks.\\n\\nOne particular scenario is debunking smaller scale models\\u2019 strong function claims. There is a persistent claim that overtraining smaller scale models leads to model performance that is almost on par with larger scale models, again relying on comparison with standardized benchmarks. On the contrary, we see in our experiments small scale models having much lower average correct response rates than larger scale ones, most of them collapsing close to 0 or having 0 rates across all variations. This discrepancy is not visible on standardized benchmarks. E.g., when testing GPT-4o-mini, the claimed close performance proximity to the larger GPT-4o (backed up by standardized benchmarks, e.g. https://artificialanalysis.ai/models/gpt-4o-mini) falls apart (Fig. 
L https://hackmd.io/_uploads/BJ2M1-Mmke.png).\\n\\nAnother example of debunking overblown claims is the case of NuminaMath-7B, which was ranked 1st at the recent AIMO competition, solving 29/50 private set problems of olympiad math level. The claim was widely put forward that the model is capable of solving high school olympiad math problems. AIW has arguably average elementary school level and does not require any advanced math knowledge. We tested NuminaMath-7B on AIW and observed a strong collapse of this model on the AIW problem, with correct response rates close to 0 across AIW variations 1-4 (Fig. F, https://hackmd.io/_uploads/SybG2hqz1x.png). Using AIW Light control problems, we can also see that NuminaMath-7B can handle all the low-level operations and knowledge required to deal with family structure, ruling out that those are the issues. Using the AIW problem setting, we can thus contradict the strong claim of being capable of dealing with olympiad-level high school math.\\n\\nThus, the AIW problem and its variations offer a measurement technique that can reveal lack of robustness and model weaknesses in generalization and core functions that remain undiscovered by current benchmarks. We think that our study can also serve as a vivid warning that many of the claims put forward for strong core functions of SOTA LLMs cannot be trusted, as they often rely on benchmarks that overlook clear function deficits, and the AIW problem with its variations offers a tool for systematic, reproducible stress testing and debunking of such claims.\\n\\nReferences -> following final part\"}",
"{\"title\": \"Response 2\", \"comment\": \"I still hold the opinion, shared by most reviewers in this batch, that one example is not sufficient to say much about a model's capabilities. It would be interesting to study what happens in a model's internals (e.g., Llama) when it fails on the problem. Especially looking at the attention scores (we know there are severe limitations to this mechanistic approach, but it is something) when a model generates the numerical solution.\\n\\nAgain, I believe the article is quite well written, but what is lacking here is a consistent analysis of a few things, including whether a model failed because it cannot understand the notions of brother and sister, or it makes an assumption on Alice not being one of the sisters, or again the model cannot perform 2-hops reasoning in this specific case (while in many others, it can, see for example [2]).\\n\\n[2] Graph-enhanced Large Language Models in Asynchronous Plan Reasoning, ICML.\"}",
"{\"title\": \"Comment after Reviews\", \"comment\": \"Overall, this work is interesting and has good suggestions.\\n\\nOur last comment as reviewer summarises our proposal for how to improve this work. Specifically: extending the dataset to make it more robust to future changes (we do not mean a massive expansion, but one meaningful enough for this work to remain valid next year).\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you for providing a detailed answer to our review.\\n\\n(We apologise for answering late - the current workload is quite high).\\n\\nWe raise the score to 5; however, we still have concerns and believe a revision is needed:\\n\\n---\\n1. First study:\\n\\n> To our knowledge, ours is the first study that systematically quantifies such lack of robustness in a simple scenario, seeing all SOTA LLMs exhibiting this phenomena and using control experiments to rule out other causes. \\n\\nThere are works such as \\\"length generalisation of LLMs\\\" that study generalisation over input length. More recently there is also \\\"What Makes Math Word Problems Challenging for LLMs?\\\", which studies math problems and variations based on the input. (Among other such findings.)\\n\\n---\\n2. Accuracy in rebuttal\\n\\n> This approach makes it possible to draw conclusions about generalization deficits, contrary to various previous anecdotal examples where such systematic measurements were not performed, findings were hardly reproducible and no clear conclusions about extent and nature of model failures were possible. \\n\\nIt would be good to be more precise when referring to previous work. (Which work was \\\"hardly reproducible\\\"?)\\n\\n---\\n3. Size of the dataset\\n\\n> In the simplest form, a benchmark for measuring generalization can be constructed relying on already existing AIW problem variations. Using a robustness score computed over the shape of measured fluctuations distribution, models can be ranked in their generalization capability. A larger and more diverse dataset can be created by procedurally generating further AIW versions, where further variations can be introduced, e.g. varying names of entities, relational structure of the problem, and so on.\\n\\nWe believe that this would be required for this work to be a meaningful contribution to our field. 
At this stage we think that it is too narrow a study and dataset, and a bigger, more useful dataset would be interesting - especially since with the next update of LLMs this benchmark might already be obsolete (due to its limited size and breadth).\"}",
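The procedural augmentation requested here (and the modified-template AIW versions mentioned in the rebuttal, e.g. varying entity names and replacing brothers/sisters with male/female friends) could be generated along these lines. This is a minimal sketch: the template wordings, the name list, and the friends frame with its assumed analogous answer are hypothetical; only the sibling frame and its ground-truth answer M+1 are taken from the thread.

```python
import itertools
import random

# Two relation frames; the sibling wording follows the AIW problem from
# the thread, the friends wording is an assumed analogue.
FRAMES = {
    "siblings": "{name} has {n} brothers and she also has {m} sisters. "
                "How many sisters does {name}'s brother have?",
    "friends":  "{name} has {n} male friends and she also has {m} female friends. "
                "All of them are friends with each other. "
                "How many female friends does {name}'s male friend have?",
}

NAMES = ["Alice", "Maria", "Sasha", "Patricia"]  # illustrative entity names

def generate_aiw(n_range=range(1, 8), m_range=range(1, 8), seed=0):
    """Return shuffled (prompt, correct_answer) pairs over all combinations."""
    items = []
    for frame, name, n, m in itertools.product(FRAMES, NAMES, n_range, m_range):
        prompt = FRAMES[frame].format(name=name, n=n, m=m)
        items.append((prompt, m + 1))  # the protagonist herself adds one
    random.Random(seed).shuffle(items)  # deterministic order for reproducibility
    return items
```

Scoring a model over such a generated set, rather than over a handful of fixed variations, would give the broader, harder-to-memorize benchmark the reviewer asks for.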
"{\"summary\": \"The paper demonstrates that LLMs struggle with generalisation on a simple problem. Specifically, the authors construct a `simple' family relationship question and various variations thereof. (i.e. in short the question is: Alice has N brothers and M sisters. How many sisters does Alice's brother have?) The paper demonstrates that small variations of this question break state-of-the-art LLMs, even across different prompting techniques.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The strengths of the paper:\\n1. Good number of experiments specific to the question demonstrated by the authors\\n2. Careful analysis across a wide variety of models.\\n3. Interesting finding that breaks models.\", \"weaknesses\": \"The weaknesses of the paper:\\n1. The study, while interesting and definitely highlighting a problem with modern LLMs, is quite limited by the actual test set (basically being based on a single question and variations thereof).\\n2. The study offers quite limited insight into the actual limitations of the model.\\n\\nConcretely, although many models were run on this small dataset (and it is understandable that so many models can only [reasonably] be run on smaller datasets), the contribution is quite limited regardless. **Finding specific phrasings that break a model is very common** to most problems and to most people that have done prompt engineering.\\n\\nFurthermore, the actual analysis, while it removes high-level doubts about the approach (such as the female boost or the control question), does not provide deeper insights into what might be going on. Very interesting work in this regard would be \\\"Physics of Language Models\\\", which provides excellent analysis of how models perform and why they generalise poorly. https://physics.allen-zhu.com/\\n\\nGenerally, your work is very interesting and should be pursued further. 
Well done, however, in terms of research contribution it requires more interesting datasets (than single examples that work poorly and then others that work well; as mentioned earlier, this is very common for most tasks). Also, your analysis should be much more detailed in terms of how and why models perform poorly or well on these tasks. (Again, the Physics of Language Models is an amazing work (not ours, unfortunately ;)).\", \"questions\": \"Some questions that could help you with your research:\\n1. What specifically do you think can be discovered about LLMs using your research (going beyond that LLMs perform poorly on specific examples, but perform better on others)?\\n2. How could you construct a dataset that measures that specific quality?\\n3. How could you then propose methods to overcome a fundamental problem that you have identified?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"Summary: The paper investigates the generalization and reasoning deficits in state-of-the-art Large Language Models (LLMs) by employing a simple, common-sense reasoning problem called the \\\"Alice in Wonderland\\\" (AIW) problem. This problem reveals severe shortcomings in generalization and reasoning among advanced models like GPT-4 and Claude 3 Opus, even under variations of the problem that do not alter its difficulty or structure. The authors highlight the discrepancy between the claims of robust reasoning capabilities made by standardized benchmarks and the observed failures on simple tasks. Additionally, the paper introduces systematic variations and control tests to isolate the issue and provide evidence of fundamental deficits in LLMs\\u2019 ability to generalize across minimal perturbations.\", \"strengths\": [\"Important problem being studied and key shortcomings of LLMs being exposed\", \"Code and raw experimental data are made available for validation.\"], \"weakness\": [\"Lukewarm response from all but one reviewer and the positive reviewer didn't champion the paper\", \"Limited Scope of Problem: The analysis is focused on a single problem type and its variations, restricting the generalizability of the findings.\", \"Lack of Deeper Diagnostic Insights: While generalization failures are observed, the paper does not delve into the underlying architectural or training-related causes of these deficits.\", \"Comparative Benchmarking: Insufficient discussion on why standardized benchmarks fail to detect these limitations and how alternative designs could address this.\", \"Broader Dataset: Reviewers suggested expanding the dataset to ensure robustness and broader applicability.\", \"Some formatting and citation issues in the manuscript\"], \"decision\": \"Given the lack of enthusiasm from the reviewers and limited scope, unfortunately, the paper can't be accepted in its current form and addressing all the concerns would warrant another 
round of reviewing.\", \"additional_comments_on_reviewer_discussion\": [\"We thank the authors and reviewers for engaging during the discussion phase towards improving the paper. Below are some of the highlights:\", \"1. Single Problem Concern:\", \"Reviewers questioned whether conclusions could be drawn from one problem type\", \"Authors argued that their minimal test case was sufficient for falsifying strong generalization claims, supported by systematic variations and controls\", \"2. Interpretation of Results:\", \"Debate about whether results showed complete lack of reasoning vs specific deficits\", \"Authors clarified they weren't claiming complete inability to reason, but rather inconsistent generalization\", \"3. Technical Clarifications:\", \"Questions about model performance differences (e.g., Llama-2 vs Llama-3)\", \"Authors provided detailed analysis showing similar underlying issues despite surface differences\", \"4. Additional Experiments:\", \"Authors conducted new experiments with modified problem templates and different entity types\", \"Results reinforced original findings about generalization deficits\", \"The authors were highly responsive and provided detailed, evidence-based responses to all major concerns.\"]}",
"{\"summary\": \"The authors discovered a surprisingly simple and concise problem that makes most LLMs, including state-of-the-art models, fail. The problem, namely AIW, belongs to the class of basic reasoning problems where humans excel. The authors show that LLMs fail on the vanilla version of the AIW problem and semantically equivalent variations; techniques such as chain-of-thought or more advanced prompting methods fail at mitigating such issues.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"I am really surprised that such a simple example makes LLMs fail, so I consider this discovery valuable. I tried to prompt a few models, and I agree with the authors that this problem is indeed hard even for models like GPT-4 (though I tried a few times with GPT-o1-preview, and it correctly solves the task, but it surprisingly fails, replying with nonsense, when we input negative numbers!).\\n\\nThe article is well written and easy to follow, and the AIW results are robust enough to support the claim that most LLMs fail on such problems.\", \"weaknesses\": \"The authors did not try to add illustrations (e.g., k-shot) to mitigate the issue. I tried to add an illustration, but some models still failed. That analysis would add value to the work. While more expensive, fine-tuning a small model would also add value to the consistency of the case study they present.\\n\\nBeyond that, my biggest concern is that the authors tried and reported only one example of failure across multiple models. \\nTo make an analogy with the adversarial robustness literature, this is equivalent to finding a single adversarial example in computer vision that makes most models misclassify an input (an example of a \\u2018universal trigger\\u2019).\\nThe paper lacks a consistent analysis of other examples, and, in this form, it reduces the contribution to an exciting yet anecdotal showcase of failure. 
Plenty of articles show failure cases of LLMs on *many* examples and variations; one very popular example is [1].\\n\\nFurthermore, the authors do not provide a solution or a tentative plan to mitigate the problem (but that is not necessarily a limitation).\\nThe authors do not give a reasonable rationale behind why LLMs fail on the AIW problem. Interestingly, the paper points out that LLMs fail on the AIW Light Arithmetic Total Girls, but that is another anecdotal showcase of failure that does not tell us much about why LLMs fail on such simple problems.\\nFor example, a model that solves the vanilla AIW would possibly \\u201ccreate\\u201d a graphical representation of Alice and her brothers and sisters; then, the LLM can count the number of edges from one of the brothers connected to all the sisters. That means a sufficient (but not necessary) condition to solve the problem is being able to perform 2-hops reasoning and counting (from one of the brothers to all the sisters). LLMs seem to lack such capability.\\n\\n[1] Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks, Tomer Ullman.\", \"questions\": \"1) What happens when N and M grow larger than 7, and why do they decide to set that as the upper bound on such variables?\\n\\n2) Have the authors tried with floating and/or negative values for N and M? 
If a model still replies with the consistent (yet wrong) reasoning, that is a strong hint a model does not understand the task under consideration (i.e., it cannot connect numerical and graphical reasoning with family relationships).\\n\\n3) Have the authors tried to ask the model to generate a graphical representation of Alice\\u2019s family and then solve the task?\\n\\n4) Why do the authors focus on a single example and not on a consistent range of variations of similar problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We thank the reviewer for the time spent on our work. We appreciate the reviewer\\u2019s remarks on the paper being well written, and also pointing to the clarity of the ideas presented and the reproducibility of the findings by open-sourcing data and code.\", \"we_also_appreciate_various_points_made_by_the_reviewer_and_would_like_to_address_those_in_following\": \"> The problem setting of AIW has certain interfering factors \\u2026 I believe that testing similar problems in mathematical reasoning tasks (like GSM8K or MATH) would be more convincing.\\n\\nWe think that problems formulated in GSM8K and MATH have, on the contrary, lengthier, overloaded formulations and thus a lot more interfering factors than the simple AIW problem (Fig. A, https://hackmd.io/_uploads/HJULH2IG1l.png ; Suppl. Fig. 16, 17 for bench failures). The main goal of our study was to find a \\u201cminimal\\u201d problem to convincingly falsify the hypothesis that current SOTA LLMs possess strong generalization and robust reasoning in the zero-shot regime. Our intention was thus to reduce problem complexity as much as possible compared to standardized benchmarks and come up with a simple, short problem, unambiguously expressible in natural language, which despite its simplicity could still be used to stress test the core model capability for zero-shot generalization and reasoning. Contrary to problems used in GSM8K and MATH, the AIW problem is a much simpler, short, common sense math problem that can arguably be easily solved by elementary school children. As we obtain strong evidence that all tested SOTA LLMs, including the most advanced ones, cannot handle variations of that simple problem robustly and exhibit strong performance fluctuations, with correct response rates going from 1 to 0, despite variations being just different number instantiations (Fig. 
B, https://hackmd.io/_uploads/rJPFH38Gyl.png; paper Fig 1, 2), we can convincingly falsify the hypothesis without relying on more complex problems from GSM8K or MATH. Simple AIW structure also allows us to conduct control experiments (Fig. 3, 4, 5), ruling out failure in elementary operations as cause of the breakdown, which is unclear how to perform with GSM8K or MATH problems.\\n\\n> The issue here is that the LLM overlooks counting Alice herself, rather than lacking reasoning ability\\n\\nWhile we observe various occasional failing operations that lead to wrong AIW solutions (and we agree that overlooking counting Alice herself is one of those), we think that we can convincingly rule out low-level operation failure like those as the main cause of the observed breakdown. We obtain clear evidence for strong performance fluctuations on problem irrelevant AIW variations 1-4 (instantiating different brothers and sisters numbers N,M) that all tested SOTA LLMs consistently exhibit (paper Fig 2, 6, Suppl. Fig 10; Fig. B https://hackmd.io/_uploads/rJPFH38Gyl.png). These fluctuations are impossible to explain when assuming the same low-level operations behind the performance - eg, failure to count Alice should then affect all variations the same way, as merely changing numbers N, M does not change the way of counting. Another evidence is obtained by our AIW Light control experiments (Fig. 3, 4, 5), where problems are constructed to contain operations that are also required to solve AIW original, though asking different questions to make the problem further simpler. We see fluctuations largely disappear, and models collapsing on AIW original show high correct response rates across all variations. Thus, we can prove low-level operations - handling family relations, binding female attributes, elementary arithmetics like counting - are actually intact. 
The remaining hypothesis is then generic generalization and basic reasoning deficits that result in failure to robustly infer the problem structure. This explains the strong performance difference across different AIW variations despite the problem structure being the same, assuming different operations are inferred and executed.\\n\\n> AIW and GSM-Symbolic comparison\\n\\nThe GSM-Symbolic (GSM-S) work follows our method of using problem templates and introducing variations, then measuring how sensitive models are wrt. the variations, with the same intention - drawing conclusions from observed model robustness or the lack thereof about generalization and reasoning capabilities. We think GSM-S went the right way; however, the evidence obtained there is rather weak for claiming severe deficits as done with AIW (Fig. C, https://hackmd.io/_uploads/Sk7aBh8Gyx.png). There are two clear differences: 1) in our work, we observe strong fluctuations even for the most advanced models like GPT-4, with scores going up to 1 and down to 0 across variations. GSM-S sees only weak fluctuation, e.g. around 0.07 for GPT-4o. 2) we see many models collapsing to very low rates, while in GSM-S rates are still high. E.g., Llama-3-8B scores $< 0.15$ on AIW, while $>0.69$ on GSM-S. Thus, while AIW provides clear breakdown evidence, the evidence on GSM-S is inconclusive.\"}",
"{\"title\": \"Rebuttal summary, continuation 1\", \"comment\": \"# Controlling for low-level issues, contrast to reports of anecdotal model failures\\n\\nWe also would like to stress that one of the motivations behind our work was to depart from anecdotal reports on various failures of SOTA LLMs made in the past. To gather evidence for model breakdown and postulate an existing generalization deficit, we performed a systematic evaluation study across a large variety of SOTA LLMs, with robust statistics accumulated over multiple trials (> 30 trials for each AIW variation 1-4 and each of 3 prompt types) under various conditions while accurately controlling for the source of variation (Overview: Fig. K https://hackmd.io/_uploads/rkPA9H-myg.png). This is in contrast to previous observations attempting to showcase model breakdown on simple problems. For instance, the reports on problems like \\u201cWhat is larger, 9.11 or 9.9\\u201d or \\u201cCount letters \\u201cr\\u201d in the word \\u201cStrawberry\\u201d \\u201d suffered from a lack of systematic evaluation, reporting single cases of failure without executing multiple trials and estimating failure rates, and without controlling for various conditions, eg prompt types. Often, such problems suffer from an ambiguous character (eg, adding \\u201creal numbers\\u201d may disambiguate the question above, avoiding interpretation as dates), which makes it unclear whether model failure is due to an ill-posed problem specification or indeed due to a core function deficit. 
In contrast, the AIW problem was intentionally made simple to handle, without any ambiguities or quirks in the formulation.\\n\\nAnother issue common to anecdotal reports is confounds in the form of low-level issues, eg the inability to parse the input properly due to tokenization issues, as might be the case for \\u201cStrawberry\\u201d, or, in general, the inability to access and deal with highly specific problem knowledge, which might cause model function breakdown although actual generalization and basic reasoning might still be intact. In our study, we made an effort to rule out such low-level issues as the cause of the observed model breakdown on AIW by conducting control experiments using AIW Light problems. AIW Light problems are constructed to test operations that are also required to solve AIW original, by keeping the problem template unmodified and altering the question for checking specific operations (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png, Sec 2.1, Sec 3.1.1). By observing that models handle AIW Light problems successfully, we prove that the tested models are able to handle well all \\\"low-level\\\" operations and the specific knowledge required for AIW, like parsing natural language, numbers, and tokenization in general, grasping basic family relations, binding attributes to entities (eg binding the female attribute to Alice via the pronoun \\u201cshe\\u201d) or performing arithmetic operations. Importantly, fluctuations on AIW Light variations disappear almost completely (Fig. E; paper Fig. 3,4,5). \\n\\nThus, contrary to anecdotal examples, our control experiments allow us to rule out low-level trivial issues or other quirks behind the strong fluctuations and performance breakdown observed on AIW (Fig. 
1,2,6), strengthening evidence in favor of the hypothesis stating generic generalization deficits behind the fluctuations.\\n\\n# Additional experiments with evidence for generalization deficits on further problem examples\\n\\nPer the reviewers\\u2019 request, we executed a number of additional experiments that we think further strengthen the evidence pointing to severe generalization deficits in SOTA LLMs. \\n\\nOne experiment concerns an AIW version where the numbers for brothers and sisters are instantiated to be in an exaggerated range not realistic for a typical family scenario. In this AIW version, an offset of 60 is added to the numbers in the AIW original problem. We observe the same pattern as in AIW original (Fig. H https://hackmd.io/_uploads/H1sLdj2Mye.png) - low correct response rates and strong fluctuations across variations. We also see slightly lower correct response rates on average compared to AIW original. This might point to further generalization deficits becoming apparent when dealing with numbers outside the expected problem-specific range, despite the problem structure being left unchanged. \\n\\nFurther, to confirm that the same observations hold for other simple problems of a related kind, we conducted experiments with AIW versions with modified problem templates that differ from AIW Original. For instance, we either introduce Alice and Bob as two entities in the problem structure to deal with, or we replace the brothers and sisters entities with male/female friends, abandoning the family-specific frame. Using the same experimental procedure to create variations of these problem versions, we observe the same pattern as for the AIW original, especially the strong fluctuations across variations, confirming the existence of the same generalization deficits for further problem examples (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png).\"}",
"{\"comment\": \"Thanks for the rebuttal. But my concern remains. It's hard to judge a model's reasoning capabilities from one type of tests. It just doesn't make sense to me. You can always find something that the model is bad at. To evaluate reasoning, one has to look at the performance on average, otherwise the results hold no statistical value.\\n\\nTo put this in another way, if I test human a bunch of tricky questions, and they get one question wrong, I wouldn't argue that human are fundamentally flawed at reasoning.\"}",
"{\"comment\": \"We thank the reviewer for getting involved in the discussion and appreciate the expressed willingness to raise the score. In the following, we would like to address the points mentioned:\\n\\n> First study : \\u2026 works such as \\\"length generalisation of LLMs\\u201d [1] \\u2026 \\\"What Makes Math Word Problems Challenging for LLMs?\\\" [2]\\n\\nWe think that our study is the first to provide convincing falsification of the hypothesis that posits strong zero-shot generalization and robust basic reasoning exhibited by current SOTA LLMs - including the advanced ones like GPT-4 and Claude-3 Opus. We do this by testing model generalization using the AIW problem and its variations to measure model sensitivity to perturbations that keep both the problem structure AND its difficulty unchanged (see Fig. K for an overview https://hackmd.io/_uploads/rkPA9H-myg.png). The AIW problem is formulated in a simple, common-sense natural language manner, and for testing, we use a large variety of recent SOTA LLMs. In contrast to this, [1] uses synthetic tasks where variations change both problem structure and difficulty (e.g., when changing problem length), which introduces confounds and makes it hard to draw clear conclusions about generalization deficits. Further, tests are done only on one single model (LaMDA, dating back to 2022), such that it is not clear whether the observed effects are specific to that particular model and training procedure only (the LaMDA family also never put forward a claim of being a robust problem solver, in contrast to current SOTA LLMs). [2] does not study variations or generalization breakdown at all - it has an entirely different focus, taking GSM8K problems, collecting responses to those problems by 4 different models and training classifiers to predict whether an LLM succeeds or fails given a problem, to identify what makes GSM8K problems easier or harder to solve. 
This does not allow any conclusions about model generalization, in contrast to the focused falsification we perform.\\n\\n> \\u2026 more precise when referring to previous work \\u2026 (Which work was \\\"hardly reproducible\\\"?)\\n\\nHere we were referring to various anecdotal examples that were widespread in the ML community but never received proper systematic treatment and evaluation. For instance, the reports on problems like \\u201cWhat is larger, 9.11 or 9.9\\u201d or \\u201cCount letters \\u201cr\\u201d in the word \\u201cStrawberry\\u201d \\u201d suffered from a lack of systematic evaluation, reporting single cases of failure without executing multiple trials to estimate robust statistics (eg failure rates) and without controlling for various conditions, eg prompt types. \\u201cHardly reproducible\\u201d refers here both to insufficient rigor behind the experimental procedure, preventing independent parties from reproducing it under the same conditions, and to the lack of repositories containing raw data and code to execute the procedures and obtain the same results. We provide the collected raw data and exact prompts used to execute experiments in an anonymous repo for the review (https://anonymous.4open.science/r/AITW_anonymous-69A6/README.md), and a public repo with all source code necessary to reproduce the experiments will be available after the review procedure.\\n\\n> \\u2026 a benchmark for measuring generalization can be constructed relying on already existing AIW problem variations \\u2026 we believe that this would be required for this work to be a meaningful contribution to our field. 
At this stage we think that it is too narrow of a study and dataset and a bigger more useful dataset would be interesting - especially since in the next update of LLMs this benchmarks might be already obsolete (due to its limited size and breadth).\\n\\nWe very much agree that a benchmark measuring generalization, constructed from the insights gained in our study, is the way to go, and we are working on a follow-up aiming at such a benchmark. We would like to note however that science is an incremental process. Motivation for such a benchmark has to come from work that first properly clarifies that current SOTA LLMs\\u2019 generalization is not what it seems to be as reported by current standardized benchmarks, so it also becomes clear that current benchmarks are not good enough for measuring generalization and need to be replaced. Our work reveals both model and benchmark flaws (Fig. 7, Suppl. Fig. 16,17,18,19), sending a warning signal to the community to be careful in trusting strong function claims based on current benchmarks and sketching a potential procedure that can provide a better measurement tool. We hope this can be acknowledged as an important step worth publishing. We would also like to stress that the AIW problem and its variations are not to be seen as a dataset - this is a focused minimal experimental setup and measurement procedure sufficient to induce model generalization breakdown and provide the necessary evidence to falsify the hypothesis of robust generalization in current SOTA LLMs.\"}",
"{\"title\": \"Additional experiments measuring generalization breakdown using further problem examples\", \"comment\": \"We would like to emphasize again that our work introduces a measurement procedure for model generalization that makes use of the same simple problem template to generate problem structure and difficulty preserving variations (AIW variations 1-4; see Fig. K for an overview https://hackmd.io/_uploads/rkPA9H-myg.png). Thus, we confront the models with instances of the same simple problem, which allows us to see how sensitive a model is to problem perturbations that should NOT affect the ability to cope with the problem IF generalization is intact. This is in contrast to testing models on a set of various problems that do not have much in common and where the source of variations is not controlled, which is the usual case for standardized benchmarks (eg GSM8K, Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png). To gather robust statistics, we execute many trials (> 30) for each such variation, estimating correct response rates for each variation from those multiple trials (Fig. J, https://hackmd.io/_uploads/B1MRvATzye.png). The amount of fluctuation in correct response rates across the variations then reveals model sensitivity/robustness and enables conclusions about generalization or its breakdown.\\n\\nTo further strengthen the evidence that the technique of introducing problem structure and difficulty preserving variations into the same simple problem template can be used to measure generalization breakdown, we executed, per the reviewers\\u2019 request, a number of additional experiments using various problem examples that we think provide conclusive evidence that the measurement procedure works independently of the chosen problem type.\\n\\nTo confirm that the same observations hold for other simple problems of a related kind, we conducted additional experiments with AIW versions with modified problem templates that differ from AIW Original. 
For instance, we either introduce Alice and Bob as two entities in the problem structure to deal with, or we replace the brothers and sisters entities with male/female friends, abandoning the family-specific frame. Using the same experimental procedure to create variations of these problem versions, we observe the same pattern as for the AIW original, especially the strong fluctuations across variations, confirming the existence of the same generalization deficits for further problem examples (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png).\\n\\nWe hope this amounts to convincing evidence that the experimental setup we describe in our work is useful for the community as a systematically reproducible measurement method for falsification of the strong function hypothesis and for debunking overblown claims that rely on standardized benchmarks and overlook such clear deficits. \\n\\nWe also provide another illustrative example of such a debunking procedure, using the recent case of NuminaMath-7B, which was ranked 1st at the recent AIMO competition, solving 29/50 private set problems of olympiad math level (https://huggingface.co/AI-MO/NuminaMath-7B-TIR). Based on those evals, the claim was widely put forward that the model is capable of solving high school olympiad math problems (https://www.aimodels.fyi/models/huggingFace/numinamath-7b-tir-ai-mo). The AIW problem is of average elementary school level and does not require any advanced math knowledge. We tested NuminaMath-7B on AIW and observed a strong collapse of this model on the AIW problem, with correct response rates close to 0 across AIW variations 1-4. Using AIW Light control problems, we can also see that NuminaMath-7B can handle all the low-level operations and knowledge required to deal with the family structure, ruling out that those are the issues. Using the AIW problem setting, we thus can contradict the strong claim of being capable of robustly dealing with math problems (Fig F, https://hackmd.io/_uploads/SybG2hqz1x.png). 
In particular, breakdown in such a simple setting rules out that the model will be able to deal robustly with olympiad-level math tasks, debunking the claim and raising further questions about the AIMO benchmark procedure used to measure model performance.\\n\\n(For the collected data for this debunking experiment, see the anonymous repo https://anonymous.4open.science/r/AITW_anonymous-69A6/collected_responses/raw_data_inspection/NuminaMath-7B_AIW_versions_problem_set_checked.json)\"}",
"{\"comment\": \"We thank the reviewer for the comprehensive feedback raising a number of points, which we would like to address in the following:\\n\\n> 1. The whole paper is about one type of questions \\u2026 hard to judge model\\u2019s capabilities based on one type of question alone. \\n\\nThe main goal of our paper is to provide convincing evidence for the falsification of the hypothesis stating that current SOTA LLMs possess robust generalization and reasoning, as backed up by current standardized benchmarks. To do this, we follow the scientific method and search for a \\u201cminimal\\u201d experiment/problem - as simple as possible while already providing sufficient evidence - to convincingly falsify the strong function hypothesis. The AIW problem satisfies this requirement - it is so simple that it can arguably be solved by elementary school children, having a short, concise, unambiguous formulation in natural language (Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png, Suppl. Tab. 2). Any system that claims robust zero-shot generalization and basic reasoning should be able to solve it across variations in the numbers in the problem template with a correct response rate close to 100%. As advanced SOTA LLMs like GPT-4/4o or Claude Opus/Sonnet claim generalization and reasoning on high school or even PhD level problems, the observed lack of robustness and strong performance fluctuation across variations of such a simple problem (Fig. B https://hackmd.io/_uploads/rJPFH38Gyl.png; paper Fig 1,2,6) clearly disproves the claim, also revealing the deficits of standardized benchmarks in detecting such clear generalization failures (Fig 7, Suppl. Fig. 16,17). The goal of hypothesis falsification is thus fulfilled using one problem type that delivers sufficient evidence. \\n\\n> 2. GPT-4o superior performance, model size or training?\\n\\nWe see a clear effect of pre-training scale on the observed AIW performance. 
The only stronger performers that show correct response rates > 0.3 across variations are large-scale pretraining models like GPT-4 and Claude Opus (Fig. 1; Suppl. Tab. 7). All those stronger performers suffer though from strong fluctuations across AIW variations (Fig 2, 6). Models at smaller scales, eg Llama 3 8B, stay well below 0.2, most of them residing close to 0 or collapsing entirely to 0 across all variations (Suppl. Fig. 8).\\n\\n> 3. I don\\u2019t consider \\u201cfemale boost\\u201d as totally redundant information. \\u2026 \\u201cShe\\u201d as a sole indicator of Alice being a female is more a syntactic problem, which shouldn\\u2019t part of model\\u2019s burden \\u2026\\n\\nWe agree that to solve the AIW problem, SOTA LLMs have to infer the problem-relevant information from the natural language description, and this also requires properly handling language syntax. Either using \\u201cAlice is female\\u201d or using \\u201cshe\\u201d conveys the same information - Alice being female - if language handling is successful; therefore we state that adding \\u201cAlice is female\\u201d does NOT add new information to the natural language problem description. To rule out that such low-level language parsing/handling issues are the problem behind the observed failures, we conducted control experiments using the AIW Light Arithmetic Total Girls problem version (Sec. 2.1, 3.1.1; Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png), which tests whether the binding of the female attribute via \\u201cshe\\u201d is handled properly. We see that most models that suffer a clear breakdown on AIW solve AIW Light successfully, with high correct response rates and - importantly - vanishing fluctuations (Fig. 5). Thus, the \\u201cfemale boost\\u201d (Fig. 6) is evidence for a performance change due to entirely redundant information, again pointing to lack of model robustness and a generalization deficit.\\n\\n> 4. 
\\u2026 paper adds nothing significantly to the existing discussion on whether LLM can reason or generalize. \\u2026 a pure test on reasoning should not rely much on extra knowledge. The AIW test ... needs model\\u2019s understanding of \\u201cshe\\u201d as Alice (syntactic) and basic family structure (external knowledge). The actual reasoning \\u2026 is in my opinion \\u2026 not the main bottleneck ... also supported in the paper where \\u201cfemale boost\\u201d can improve performance.\\n\\nThe AIW problem is constructed exactly so as to minimize the amount of knowledge necessary to handle it - in contrast to problems overloaded with various contexts used in standardized benchmarks (Fig. A). Using AIW Light control experiments (Sec. 2.1, 3.1.1), we prove that handling basic family structure, binding female attributes via \\u201cshe\\u201d, and elementary arithmetic like counting are all intact (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png; paper Fig. 3,4,5), ruling out that this additional knowledge poses an issue. Given that the models show the ability to cope successfully with all the knowledge and operations required to solve AIW, the remaining hypothesis is that generic generalization deficits are responsible for the observed strong fluctuations on AIW. The \\u201cfemale boost\\u201d is in line with that - behavior changes despite just adding redundant information, with strong fluctuations across variations still persisting (Fig. 6).\"}",
"{\"comment\": \"We would like to thank the reviewer again for the time and involvement in the discussions.\\n\\nAs a follow-up, we would like to emphasize that in contrast to testing models on a set of various problems that do not have much in common and where the source of variations is not controlled (which is the usual case for standardized benchmarks, eg GSM8K, Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png), we confront the models with instances of the same simple problem, which allows us to see how sensitive models are to perturbations that keep problem structure and difficulty unchanged and thus should NOT affect the ability to cope with the problem IF generalization were intact. Performance fluctuations across problem structure and difficulty preserving variations observed in this experimental setup can thus provide direct evidence for the degree of generalization breakdown, additionally backed up by control experiments to rule out low-level issues (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png). To gather robust statistics, we execute many trials (> 30) for each AIW variation, estimating correct response rates for each variation from those multiple trials (Fig. J, https://hackmd.io/_uploads/B1MRvATzye.png) and also checking whether behavior is consistent independently of prompting conditions using 3 different prompt types (Fig. B, https://hackmd.io/_uploads/rJPFH38Gyl.png). See Fig. K https://hackmd.io/_uploads/rkPA9H-myg.png for an overview.\\n\\nTo confirm that the same observations hold for other simple problems of a related kind, per the reviewers\\u2019 request we conducted further additional experiments with AIW versions with modified problem templates that differ from AIW Original. For instance, we either introduce Alice and Bob as two entities in the problem structure to deal with, or we replace the brothers and sisters entities with male/female friends, abandoning the family-specific frame. 
Using the same experimental procedure to create variations of these problem versions, we observe the same pattern as for the AIW original, especially the strong fluctuations across variations, confirming the existence of the same generalization deficits for further problem examples (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png).\\n\\nWe hope this amounts to convincing evidence that the experimental setup we describe in our work is useful for the community as a systematically reproducible measurement method for falsification of the strong function hypothesis and for debunking overblown claims that rely on standardized benchmarks and overlook such clear deficits. \\n\\nWe also would like to provide another illustrative example of such a debunking procedure, using the recent case of NuminaMath-7B, which was ranked 1st at the recent AIMO competition, solving 29/50 private set problems of olympiad math level (https://huggingface.co/AI-MO/NuminaMath-7B-TIR). Based on those evals, the claim was widely put forward that the model is capable of solving high school olympiad math problems (https://www.aimodels.fyi/models/huggingFace/numinamath-7b-tir-ai-mo). The AIW problem is of average elementary school level and does not require any advanced math knowledge. We tested NuminaMath-7B on AIW and observed a strong collapse of this model on the AIW problem, with correct response rates close to 0 across AIW variations 1-4. Using AIW Light control problems, we can also see that NuminaMath-7B can handle all the low-level operations and knowledge required to deal with the family structure, ruling out that those are the issues. Using the AIW problem setting, we thus can contradict the strong claim of being capable of robustly dealing with math problems (Fig F, https://hackmd.io/_uploads/SybG2hqz1x.png). 
In particular, breakdown in such a simple setting rules out that the model will be able to deal robustly with olympiad-level math tasks, debunking the claim and raising further questions about the AIMO benchmark procedure used to measure model performance. \\n\\nFor the collected data for this debunking experiment, see the anonymous repo https://anonymous.4open.science/r/AITW_anonymous-69A6/collected_responses/raw_data_inspection/NuminaMath-7B_AIW_versions_problem_set_checked.json\"}",
"{\"title\": \"Rebuttal Continuation, 4\", \"comment\": \"> Question 2. Have the authors tried with floating and/or negative values for N and M? If a model still replies with the consistent (yet wrong) reasoning, that is a strong hint a model does not understand the task under consideration (i.e., it cannot connect numerical and graphical reasoning with family relationships).\\n\\nWe did not try to experiment with unrealistic numbers. Instead, we conducted AIW Light control experiments with natural, realistic formulations to check whether low-level operations and additional family-specific knowledge can be successfully handled by the tested models, confirming they do not have issues performing arithmetic operations within the family relationship structure (Fig E https://hackmd.io/_uploads/ByCpjM9M1x.png; Fig 3,4,5).\\n\\n> Question 3. Have the authors tried to ask the model to generate a graphical representation of Alice\\u2019s family and then solve the task?\\n\\nWe did indeed experiment with prompting models to generate various intermediate representations, including graphical representations and SQL code (Sec. 3.2, p.9, Suppl. Sec G, Suppl. Fig. 32). We observed similar behavior, where poorly performing models cannot generate such representations and stronger performing models cannot generate correct representations robustly across variations. Thus, asking to work with explicit intermediate representations does not seem to improve the situation. It also has to be noted that hinting models to attempt a certain type of representation construction helpful for the problem can be considered an implicit hint on how to proceed with the problem solution, and thus cannot be compared directly to a problem formulation that does not reveal such implicit hints.\\n\\n> Question 4. 
Why do the authors focus on a single example and not on a consistent range of variations of similar problems?\\n\\nWe do provide a consistent range of variations, being the instantiations of the numbers N,M in the AIW problem template (Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png; Suppl. Tab. 2). Crucially, as we carefully control the source and type of variations, which do not change the problem structure or its difficulty and are a natural part of the problem formulation, this allows us to draw conclusions about existing generalization deficits from the observed model sensitivity to those variations. As discussed previously, the AIW problem thus provides us the minimal setting delivering sufficient evidence (together with the AIW Light control experiments) to falsify the hypothesis of strong zero-shot generalization claimed by SOTA LLMs, so that further problems are not required. We have also conducted further experiments with similar problems to obtain further confirmation of the observed behavior, replacing for instance brothers and sisters with male/female friends or introducing Alice and Bob as two entities in the family structure, while keeping the same procedure for creating variations by instantiating the numbers N,M. We have observed the same pattern in those experiments as for AIW original, further confirming the original findings (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png).\"}",
"{\"comment\": \"Thanks for the swift response. We think there is still a general misunderstanding of how our work proceeds to demonstrate the severe generalization deficit, as evident from the arguments in the response:\\n\\n> If I test human a bunch of tricky questions, and they get one question wrong, I wouldn't argue that human are fundamentally flawed at reasoning.\\n\\nThis is an entirely wrong analogy wrt. our work, and we agree that works using such an approach would not be able to draw proper conclusions. We do not have an arbitrary bunch of tricky questions. We have the opposite situation. We have one simple question which can arguably be handled easily by elementary school children. We then generate variations using the question template, such that the variations are natural for the posed problem and none of them changes the problem structure or difficulty. Thus, the bunch consists of problem instances generated from the same simple problem where merely the numbers are varied in the corresponding placeholder variables. We estimate the correct response rate from many trials (at least 30) for each of the problem instances in the bunch, such that we obtain a correct response rate for each of AIW variations 1-4 (thus, the \\\"get one question wrong\\\" notion is actually not appropriate here - we can only speak about lower or higher correct response rates over many trials). The puzzling question here is exactly why there should be ANY difference in performance across such variations at all, as the rate of getting \\\"one question wrong\\\", or right, across many executed trials should be roughly equal for all variations, as they pose the very same problem - IF generalization were intact. \\n\\nIf we imagine how humans would perform on such a task, we definitely DO NOT expect their performance on such a bunch to vary wildly between very high and very low across variations (Fig. J https://hackmd.io/_uploads/B1MRvATzye.png). 
Imagine a test with human probands where a person is confronted with these variations of the AIW in many repeated trials: \\n\\n> Variation 1: Alice has 3 brothers and she also has 6 sisters. [Correct answer: 7 ] \\\\\\n> Variation 2: Alice has 2 sisters and she also has 4 brothers. [Correct answer: 3 ] \\\\\\n> Variation 3: Alice has 4 sisters and she also has 1 brother. [Correct answer: 5 ] \\\\\\n> Variation 4: Alice has 4 brothers and she also has 1 sister. [Correct answer: 2 ] \\\\\\n> \\\\\\n> How many sisters does Alice\\u2019s brother have?\\n\\nArguably, it would be shocking to observe persons consistently having, over many trials, close to 100% correct responses on Variation 4, while entirely breaking down, close to 0, on Variation 3 (Fig. J; paper Fig. 2), as the variations actually pose the very same simple problem without altering the problem structure, the only difference being the instantiated numbers. That is because a human would be able to generalize, extracting the actual problem structure behind the bunch, and thus solve the problem independently of the instantiated numbers equally well. This is how we also tested the generalization of the SOTA LLMs, and we see them consistently failing (Fig. B https://hackmd.io/_uploads/rJPFH38Gyl.png; paper Fig. 1,2,6), which, given all the persisting claims about those models successfully handling math problems at olympiad level, we think is equally shocking and points to clear flaws not only in the models, but also in the benchmarks.\\n\\n> To evaluate reasoning, one has to look at the performance on average, otherwise the results hold no statistical value.\\n\\nIn our work, we test a generic kind of reasoning - the ability to infer the underlying problem structure, which enables generalization and thus handling of problems despite variations in their formulation. It matters which average to take to test this. 
Looking at performance averaged over all variations would be misleading, as breakdown of generalization is manifested in models' sensitivity to variations that is NOT visible if averaging over all variations. Eg, as evident in Fig. 1, 2, average correct response rates of better performers like GPT-4/4o or Claude 3 Opus, can be substantial. Only when looking at correct response rates of each variation separately, it becomes apparent that such average results from high performance on some, and low or very low close to 0 performance on other variations - despite variations being instances of the same problem (Fig. J https://hackmd.io/_uploads/B1MRvATzye.png). This breakdown in generalization would remain undetected if averaging over variations, and this is what also happens with standardized benchmarks that do not look into measuring performance on variations of problem instances (Fig. 7, Suppl. Fig. 16,17,18,19), stating high performance across static questions and creating misleading illusion of strong function where there are clear deficits. We thus average over right/wrong responses for each variation separately (>30 trials), obtaining statistics that allows conclusion about model sensitivity to variations, pointing to robustness or lack thereof (see again Fig. J for instructive overview).\"}",
"{\"title\": \"Rebuttal summary\", \"comment\": \"We would like to thank all reviewers for the feedback. We provide a summary of the rebuttal, containing points we think are helpful to emphasize for discussion.\\n\\n# Testing generalization and falsifying SOTA LLMs strong function hypothesis\\n\\nWhile reviewers stressed the clarity of the idea and extensive experiments in our work, we would like to clarify a potential misconception wrt. to the work that we think is repetitively echoing through the feedback. This misconception concerns the purpose of the AIW problem constructed in our work for testing generalization, relation of this problem to previous anecdotal attempts to induce model breakdowns of various kinds and the role of systematically controlled variations in properly measuring model sensitivity to problem structure preserving perturbations that reveals generalization deficits. \\n\\nMain goal of our study was to test the claims of strong function attributed to current SOTA LLMs, specifically regarding strong zero-shot generalization and robust basic reasoning. Such claims are mainly based on standardized benchmarks (MMLU, HellaSwag, GSM8k, MATH, etc) where current SOTA LLMs obtain high scores. We were curious whether these claims can be put to test using much simpler, more minimalistic problems, as problems used in standardized benchmarks have often formulations overloaded with various additional context and knowledge. We aimed for a problem as simple as possible though still useful for measuring generalization of SOTA LLMs. \\n\\nAIW problem, which has a short, unambiguous natural language formulation, was the outcome of this effort (Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png; Suppl. Tab. 2). Despite its simplicity, we observed surprisingly low correct response rates averaged across various conditions for all SOTA LLMs, including the most advanced ones like GPT-4/4o or Claude 3 Opus (Fig. 1, Suppl. Fig. 8). 
The scores were substantially lower than the ones SOTA LLMs obtain on benchmarks with seemingly more complex problems, already posing an intriguing discrepancy to standardized benchmark measurements. \\n\\nMore importantly however, introducing controlled natural variations into AIW problem template by instantiating numbers N,M involved in its formulation, we created a procedure that measures model sensitivity to perturbations that preserve problem structure and difficulty. This allowed us to probe models\\u2019 generalization - should it be intact, model would be capable of handling the problem equally well across variations. This is in contrast to standardized benchmarks, where problems are static and there is no default way to measure model sensitivity to problem variations. Using this procedure, we observed strong fluctuations across problem variations (Overview: Fig. K https://hackmd.io/_uploads/rkPA9H-myg.png; Fig. 2, 6). This is surprising not only because the problem is simple, but also because AIW variations do not change problem structure or its difficulty, being just instantiations of different numbers natural to the problem. Puzzling is why there should be any difference in performance across AIW variations at all. Given the variations should be actually irrelevant for problem handling, expected would be either struggling equally to solve any of the variations or handling them equally successful - assuming generalization is intact. Thus, observing such strong fluctuations in all models that manage to get significant non-zero average correct response rates, we can conclude that models are not able to properly infer the same underlying problem structure which is behind each AIW variation, which, given simplicity of the posed problem, points to severe generalization deficits. 
\\n\\nWe think it can be helpful to imagine how humans would perform on such AIW variations, to stress what the observed breakdown of model robustness and overall low average correct response rates mean. Having a test with average human probands confronted with the AIW variations over many trials (we do > 30 trials per variation to compute correct response rates)\\n\\n> Variation 1: Alice has 3 brothers and she also has 6 sisters. [Correct answer: 7 ] \\\\\\n> Variation 2: Alice has 2 sisters and she also has 4 brothers. [Correct answer: 3 ] \\\\\\n> Variation 3: Alice has 4 sisters and she also has 1 brother. [Correct answer: 5 ] \\\\\\n> Variation 4: Alice has 4 brothers and she also has 1 sister. [Correct answer: 2 ] \\\\\\n> \\\\\\n> How many sisters does Alice\\u2019s brother have?\\n\\nwe think it would be highly shocking if obtained statistics would reveal performance across variations varying wildly between very high and very low rates, the way we observe it for SOTA LLMs. We expect humans to handle any variation equally well, as we expect generalization and basic reasoning to be intact. It therefore should be equally shocking to observe SOTA LLMs breakdown in such a simple scenario, given the claims of robust graduate level problem solving put forward for models of GPT-4 and Claude 3 Opus class.\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Dear reviewer,\\n\\nwe would like to thank you again for the time spent dealing with our work. As the author-reviewer discussion nears its end, we would like to draw attention to the ongoing discussion (which seems still unfinished to us), where we attempted to resolve what we think might be a misunderstanding of how our work proceeded in obtaining evidence for the generalization breakdown of current SOTA LLMs. We are curious whether it helped to illuminate the raised issues and are looking forward to any response. Should this discussion be insightful and resolve some of the concerns, please consider reflecting this in the scores.\"}",
"{\"summary\": \"Authors study reasoning capabilities of various SOTA LLMs in a controlled environment, by synthesizing a very simple yet efficient task, Alice in Wonderland (AIW). This task is composed of multiple variations of the following template: \\\"Alice has N brothers and she also has M sisters. How many sisters does Alice\\u2019s brother have?\\\". Authors systematically study more than 20 models by varying M/N, changing family relations, introducing redundant information, and varying between prompt templates. Authors showed that models not only fail on this simple task showing high variation between prompt templates, but also that their failure cannot be attributed to arithmetic or commonsense knowledge errors, but occurs due to the lack of generalization and basic reasoning abilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Authors discuss an important problem of LLM reasoning abilities\", \"Paper is clearly written\", \"Proposed evaluation framework is novel and simple to implement and verify\", \"Authors perform extensive experiments across various models and prompt templates\", \"Detailed ablation studies on AIW variations support main claims of the paper\"], \"weaknesses\": [\"Even though authors prove that SOTA LLMs fail on the AIW task, I don't think we can claim that they are not capable of robust reasoning. On the contrary, the paper shows that LLMs are capable of some types of reasoning (like arithmetic, or basic family relations), but fail on others (logical reasoning).\", \"There are multiple formatting issues in the paper, probably caused by moving text between templates, that hurt the overall presentation of the paper (see Questions section for example).\"], \"questions\": \"1. Why do you think Llama-3 performs so much worse than the Llama-2 model? What framework did you use to run evaluations for models with open weights? 
I wonder, if there might be any issues with prompts, as Llama-3 models require different special symbols in chat template than Llama-2.\\n\\n2. There are some issues with citations across the paper where brackets are missing in most of the citations, for ex. in lines 41-42: \\\"...visual recognition Radford et al. (2021) or language understanding Devlin et al.(2018); Raffel et al. (2020); Brown et al. (2020), l...\\\" should be \\\"...visual recognition (Radford et al., 2021) or language understanding (Devlin et al., 2018; Raffel et al., 2020 ...\\\". Multiple periods are missing: lines 111, 117, 192, 309, 458, 460, 463. Chapter 3.1.1 \\\"original. like\\\" -> \\\"original, like\\\" in multiple places.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal summary, references\", \"comment\": \"### References\\n\\n[1] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199\\u201322213, 2022.\\n\\n[2] Cohere. Introducing Command R+: A scalable LLM built for business. Apr 2024 https://cohere.com/blog/command-r-plus-microsoft-azure\\n\\n[3] Mosaic. Introducing DBRX: A new state-of-the-art open LLM. March 2024 https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm\"}",
"{\"title\": \"Rebuttal Continuation, 2\", \"comment\": \"(Continuation)\\n\\nWe agree that recent works like Physics of Language Models (which consist of many parts, each dozen pages large) can provide helpful hints how to conduct search for the failure origins, while AIW like problems can serve as tools to detect failures and measure the progress of interventions.\\n\\n> Questions: 1, What specifically do you think can be discovered about LLMs using your research (going beyond that LLMs perform poorly on specific examples, but perform better on others?) \\n\\nOur research gives a measurement tool to test for generalization failures. It gives also the possibility to compare models looking at the distribution of fluctuations (Fig. 2, Fig 6), creating ranking not reflected in standardized benchmarks. Currently, our research refutes the claim of advanced SOTA LLMs to possess robust strong generalization by showing clear lack of robustness on simple AIW problem, where models with robust zero-shot generalization should exhibit only small fluctuations and close to 100% correct response rates across variations. Our research allows in general to refute overblown claims about model capabilities that rely on standardized benchmarks and other benchmarks that claim to measure advanced functions. \\n\\nEg, in case of NuminaMath-7B that was ranked 1st at the recent AIMO competition, solving 29/50 private set problems of olympiad math level, the claim was widely put forward that the model is capable of solving high school olympiad math problems. AIW has average elementary school level and does not require any advanced math knowledge. We tested NuminaMath-7B on AIW and observed a strong collapse of this model on AIW problem, with correct response rates close to 0 across AIW variations 1-4. 
Using AIW Light control problems, we can also see that NuminaMath-7B can handle all the low level operations and knowledge required to deal with family structure, ruling out that those are the issues. Using the AIW problem setting, we thus can contradict the strong claim of being capable to deal with olympiad level high school math (Fig F, https://hackmd.io/_uploads/SybG2hqz1x.png).\\n\\nWe would like to emphasize again that examples in our study that lead to poor or better performance of tested models are constructed in certain way to probe model sensitivity to variations that keep problem structure and its difficulty unchanged and thus can be used to measure generalization - in contrast to other studies that has entirely different aim, eg performance optimization by prompt tuning.\\n\\n> 2. How could you construct a dataset that measures that specific quality?\\n\\nIn the simplest form, a benchmark for measuring generalization can be constructed relying on already existing AIW problem variations. Using a robustness score computed over the shape of measured fluctuations distribution, models can be ranked in their generalization capability. A larger and more diverse dataset can be created by procedurally generating further AIW versions, where further variations can be introduced, e.g. varying names of entities, relational structure of the problem, and so on. The same evaluation would apply - given common template, models can be evaluated on all possible problem instances, measuring distribution of fluctuations. This can be done for problem templates with increasing difficulty level, such that generalization breakdown can be measured dependent also on problem difficulty. \\n\\n> 3. How could you then propose methods to overcome a fundamental problem that you have identified?\\n\\nThere are different directions to improve zero-shot generalization and model robustness that currently are unsatisfactory as evident from measured AIW fluctuations. 
One direction would be to test whether inference time interventions like various self-verification approaches might improve generalization especially in larger scale models. Another direction would be to fine-tune models on procedurally generated AIW problem-like variations and test on a held out AIW test version set whether generalization improves. Yet another, much more compute expensive direction would be to improve the core model generalization capability by modifying pre-training, eg changing dataset mixture to contain synthetic data generated from various problem templates and their variations.\"}",
"{\"comment\": \"We thank the reviewer for dealing with our work and taking the time to try out the experiments with the AIW problem and test the findings. We appreciate the positive feedback emphasizing the surprising nature of the discovery as valuable, the robustness of the findings across various models, and the well-written, easy-to-follow text. We also appreciate the further points raised, which we address in the following:\\n\\n> The authors did not try to add illustrations (e.g., k-shot) to mitigate the issue. I tried to add an illustration, but some models still failed. That analysis would add value to the work. While more expensive, fine-tuning a small model would also add value to the consistency of the case study they present.\\n\\nIn our work, we focus on zero-shot generalization as an important mode of foundation/instruction fine-tuned model operation, as it reveals a lot about a model's core capabilities to handle novel scenarios. Therefore, we leave the very interesting questions of either few-shot in-context learning or few-shot fine-tuning for future work. We were nonetheless too curious and ran preliminary experiments with few-shot in-context learning. There, we presented a few shots of solved AIW problem variations, then posed an unseen AIW variation as the test problem. We observe that models often discover and stick with the \\u201cshortcut\\u201d solution C = M + 1 (M being the number of Alice\\u2019s sisters) without learning to extract the true underlying problem structure. This becomes visible when the question in the test problem is switched such that the correct answer is no longer obtainable by adding 1. Models often still respond to the altered test question by just adding 1 to the queried variable (Fig. G https://hackmd.io/_uploads/SyvWuj3fkx.png). This preliminary evidence hints that in-context few-shot learning may have the same issues with obtaining strong robust generalization as observed for the zero-shot case. 
Furthermore, we think that given the widespread claims of advanced models like GPT-4/4o and Claude 3 Opus to possess strong generalization and reasoning, such an embarrassingly simple problem as AIW should be handled with ease in zero-shot mode, were the claim to be defended.\\n\\n> \\u2026 biggest concern is that the authors tried and reported only one example of failure across multiple models. To make an analogy with the adversarial robustness literature, this is equivalent to finding a single adversarial example in computer vision that makes most models misclassify an input (an example of a \\u2018universal trigger\\u2019). \\n\\nWe think that the analogy with adversarial examples is correct only in one sense - adversarial examples also reveal a lack of model robustness to variations that should not affect model function/performance, pointing to generalization deficits. However, our work is not of an adversarial kind. We do not start from a simple, solvable problem and probe various tweaks that do not correspond to \\u201cnatural\\u201d problem variations in order to search for a problem input alteration that breaks the model. On the contrary, the variations introduced into the AIW problem template are simple, \\u201cnatural\\u201d variations entirely in the sense of the problem structure, corresponding to the instantiation of various numbers, which leaves problem structure and difficulty unchanged (Fig. A https://hackmd.io/_uploads/HJULH2IG1l.png). Those naturally generated problem instances do not contain any backdoors or tweaks aiming to trigger model glitches. We additionally make use of AIW Light control experiments to make sure that the observed breakdowns and strong fluctuations do not originate in problem-formulation-specific issues, e.g. failing to parse language/numbers, to execute required low-level operations, or to access specific knowledge necessary for the problem solution (Fig. E https://hackmd.io/_uploads/ByCpjM9M1x.png; Fig 3,4,5). 
Thus, our approach is almost the opposite of adversarial, as we attempt to make sure that problem formulations and variations are actually easy for the models to handle.\\n\\nFurther, to confirm that same observations hold for other simple problems of related kind, we conducted experiments with various AIW versions. For instance, we introduce Alice and Bob as two entities in the problem structure to deal with, or we replace brothers and sisters entities with male/female friends, abandoning family specific frame. Using same experimental procedure to create variations of these problem versions, we observe same pattern as for AIW original, especially the strong fluctuations across variations, confirming existence of same generalization deficits for further problem examples (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png)\"}",
"{\"comment\": \"We would also like to note that following reviewers\\u2019 requests, we conducted further experiments with problems similar to AIW, to obtain further confirmation that same behavior can be observed on variety of problems, further substantiating the value of the measurement procedure assessing models\\u2019 generalization, beyond AIW original. We looked into AIW versions with modified problem templates, where we either introduce Alice and Bob as two entities in the problem structure to deal with, or we replace brothers and sisters entities with male/female friends, abandoning family specific frame.\\n\\nUsing same experimental procedure to create variations of these problem versions, we observe the same pattern as for the AIW original, especially the strong fluctuations across variations, confirming the existence of the same generalization deficits using further problem examples (Fig. I https://hackmd.io/_uploads/BJ1nqj3MJx.png) This paves the road for benchmark construction in follow up work.\"}",
"{\"title\": \"Rebuttal Continuation, 2\", \"comment\": \"One main contribution of ours to the existing discussion on whether LLMs can reason or generalize is thus a novel experimental method to detect generalization breakdown by measuring model sensitivity to problem-irrelevant variations in a fixed, minimalistic problem template. This provides a tool to properly check whether a given model can indeed generalize well or whether claims are overblown, relying on high scores on standardized benchmarks that obviously do not manage to reflect such deficits properly (see Fig. 7, Suppl. Fig. 16, 17, 18, 19 for benchmark failures).\\n\\nAnother contribution is providing clear evidence that even the most advanced SOTA LLMs do not stand this test for robust zero-shot generalization. This sends a strong warning to the technological community not to put too much trust in existing benchmarks and not to put models exhibiting high scores on those benchmarks into end applications that require robust generalization and reasoning under various conditions. It is clear that if strong fluctuations and sensitivity are exhibited even on such a simple problem as AIW, this will be only one problem of many that leads to strong sensitivity and lack of robustness to problem variations, and more complex problems will expectedly lead to an even stronger lack of robustness. Our research shows that installing robust generalization is still an open question for basic research, and provides a guide for constructing simple, well-reproducible measurement devices to probe generalization, paving the road for benchmarks that do measure this core property correctly and allow for measurable progress towards models with stronger generalization.\\n\\n> For figure 1, what do the numbers like 55, 56, 63, 69 mean?\\n\\nFor figures that show correct response rates across AIW variations 1-4, the numbers in the legend mean the prompt IDs used in the experiments, for better reproducibility. Specifically for Fig. 
1 inlay, correct response rates for each AIW variation 1-4 on STANDARD prompt type are shown, corresponding to prompt IDs 55, 56, 63, 69. Prompts are available at https://anonymous.4open.science/r/AITW_anonymous-69A6/prompts/prompts.json, collected data https://anonymous.4open.science/r/AITW_anonymous-69A6/collected_responses/raw_data_inspection/AIW_AIW_plus.json\"}"
]
} |
EJfLvrzh2Q | Rethinking Self-Distillation: Label Averaging and Enhanced Soft Label Refinement with Partial Labels | [
"Hyeonsu Jeong",
"Hye Won Chung"
] | We investigate the mechanisms of self-distillation in multi-class classification, particularly in the context of linear probing with fixed feature extractors where traditional feature learning explanations do not apply. Our theoretical analysis reveals that multi-round self-distillation effectively performs label averaging among instances with high feature correlations, governed by the eigenvectors of the Gram matrix derived from input features. This process leads to clustered predictions and improved generalization, mitigating the impact of label noise by reducing the model's reliance on potentially corrupted labels. We establish conditions under which multi-round self-distillation achieves 100\% population accuracy despite label noise. Furthermore, we introduce a novel, efficient single-round self-distillation method using refined partial labels from the teacher's top two softmax outputs, referred to as the PLL student model. This approach replicates the benefits of multi-round distillation in a single round, achieving comparable or superior performance--especially in high-noise scenarios--while significantly reducing computational cost. | [
"self-distillation",
"partial label learning",
"label noise correction",
"training with soft labels",
"multi-class classification"
] | Accept (Poster) | https://openreview.net/pdf?id=EJfLvrzh2Q | https://openreview.net/forum?id=EJfLvrzh2Q | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yydqhii0m1",
"sQQ1buLrOI",
"qQmV9jelSW",
"jCB7gLzEWT",
"ienyL9sOMk",
"eIrm9THUln",
"X94K5bONPT",
"WPfDd0FF0o",
"TO9MF2nwKD",
"RhaDI0KeOt",
"RI7bCoII0U",
"PewkUkvDJW",
"OgJrfHX1s5",
"CKoPz6PUFt",
"B1rurPyeDj",
"59ithvq3ZD",
"4N94TdySLJ",
"2QIwx1wv0K",
"0YwlBIMd5e"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732516053207,
1732331780153,
1732519587612,
1731912742721,
1734764992175,
1730210249005,
1731912915093,
1731912696862,
1730982745796,
1731912631978,
1731912850167,
1731912791939,
1730711416772,
1737523617932,
1731912955009,
1732167252141,
1730536493587,
1731912997297,
1732620727603
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_g4HH"
],
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_XuSH"
],
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_CZ26"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Area_Chair_D3Yn"
],
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_CZ26"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_UkvB"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_XuSH"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_g4HH"
],
[
"ICLR.cc/2025/Conference/Submission4076/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4076/Reviewer_UkvB"
]
],
"structured_content_str": [
"{\"comment\": \"I'm satisfied with the author's response and I still vote for acceptance.\"}",
"{\"comment\": \"Thank you for your detailed response. My concerns are totally resolved. I believe the theoretical analysis in the main paper and appendix offers valuable insight into the area of self-distillation and transfer learning. Thus, I decide to raise my score.\"}",
"{\"comment\": \"Thank you for your detailed response; I have decided to raise my score.\\nBesides, we suggest that the authors add a detailed discussion of existing PLL methods in the final version, as in the response.\"}",
"{\"title\": \"Response to Reviewer XuSH (1/2)\", \"comment\": \"We sincerely thank the reviewer for the constructive feedback. Please see our responses to the reviewer\\u2019s questions.\\n\\n>**W1. This paper focuses on self-distillation with linear probing and provides a theoretical analysis in this context. I believe that both self-distillation and linear probing are valuable techniques, but I am unclear about the purpose of combining self-distillation with linear probing. As far as I know, linear probing is widely used in self-supervised learning as a method to evaluate learned features. Why should we combine linear probing with self-distillation, especially in scenarios involving label noise?**\\n\\nCombining self-distillation with linear probing is particularly beneficial when adapting large pre-trained neural networks (foundation models) to downstream tasks. Fine-tuning an entire pre-trained model is computationally intensive and may risk overfitting, especially with limited labeled data or noisy labels. Linear probing, which trains only a linear classifier on top of a fixed feature extractor, offers a more efficient way to utilize the rich features of pre-trained models. While linear probing is efficient, the linear classifier can still overfit to noisy or corrupted labels. Self-distillation improves linear probing by reducing overfitting and enhancing generalization. By training the linear classifier over multiple rounds, self-distillation shifts the model's focus from fitting noisy labels to leveraging feature correlations among instances. This process helps the classifier make more robust predictions based on the underlying data structure rather than the potentially noisy labels. \\n\\nWe also propose a new self-distillation scheme that achieves these benefits in a single round using partial labels from the teacher's top predictions. This approach further reduces computational complexity while maintaining or even improving performance. 
It is especially useful when computational resources are limited or when quick adaptation to new tasks is required without extensive retraining.\\n\\nIn summary, combining self-distillation with linear probing enhances the effectiveness of linear probing by improving generalization and robustness, especially in domains with noisy labels (e.g., crowdsourced data). This combination is valuable in applications where computational efficiency and model robustness are critical.\"}",
"{\"metareview\": \"This paper provides a theoretical analysis of the mechanisms underlying self-distillation in a linear probing setting. The analysis demonstrates that after several rounds of self-distillation, the model's predictions converge to a weighted average of the provided labels, with the weights determined by the Gram matrix. Leveraging this insight, the authors examine the impact of label noise and evaluate the efficiency of the self-distillation method.\\n\\nExperiments validate the effectiveness of the proposed single-round self-distillation method. A key theoretical contribution of the paper is the interpretation of self-distillation as label averaging among highly correlated instances, offering a fresh perspective on its mechanism even in the absence of feature evolution. Additionally, the authors present numerical and visual analyses of the approximation of the softmax function and the feature correlation matrix, providing reasonable and convincing evidence to support their findings.\", \"additional_comments_on_reviewer_discussion\": \"There are some concerns on experiments and theoretical results and insights. After rebuttal, the authors have addressed the concerns raised by the reviewers. The reviewers have increased the rating accordingly.\"}",
"{\"summary\": \"In this paper, the authors proposed introduce a single-round self-distillation method using refined partial labels from the teacher\\u2019s top two softmax outputs, referred to as the PLL student model. This approach replicates the benefits of multiround distillation in a single round. Experiments on several datasets demonstrate the proposed method is effective.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The analysis of self-distillation in linear probing reveals that self-distillation effectively performs label averaging among instances with high feature correlations when generating predictions on the training data.\", \"weaknesses\": \"Here are a few questions I would like to ask the authors:\\n\\n1. What are the clear insights or significances of Sections 2.2, 3, and 4 in relation to the proposed method in Section 5? Could the authors summarize the roles and conclusions of Sections 2.2, 3, and 4, and their implications for Section 5? These sections are not sufficiently clear.\\n\\n2. In Section 5, the authors state, \\\"selecting the top two labels with the highest values and assigning a weight of 1/2 to each, setting all the other entries to zero.\\\" What is the rationale behind this approach?\\n\\n3. The experiments use ResNet34 as the backbone for linear probe experiments; however, training ResNet34 with all parameters is not particularly time-consuming or resource-intensive. Should a backbone with a larger parameter count be chosen for the experiments?\\n\\n4. When comparing existing PLL methods, the authors claim, \\\"Our method differs by directly employing the refined partial labels derived from the teacher\\u2019s outputs, achieving the same benefits as multi-round distillation in just one round.\\\" Does this imply that other methods incorporating the teacher's output could achieve similar or even better results? 
Are there corresponding experiments to support this?\", \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer CZ26 (1/3)\", \"comment\": \"We sincerely thank the reviewer for the constructive feedback. Please see our responses to the reviewer\\u2019s questions.\\n\\n>**W1. What are the clear insights or significance of Sections 2.2, 3, and 4 in relation to the proposed method in Section 5? Could the authors summarize the roles and conclusions of Sections 2.2, 3, and 4, and their implications for Section 5? These sections are not sufficiently clear.**\\n\\n>**W2. In Section 5, the authors state, \\\"selecting the top two labels with the highest values and assigning a weight of 1/2 to each, setting all the other entries to zero.\\\" What is the rationale behind this approach?**\\n\\nSections 2.2, 3, and 4 are crucial for understanding the development and significance of the proposed method in Section 5. We introduced the PLL (Partial Label Learning) student model in Section 5 based on the intuition gained from our closed-form analysis of multi-round self-distillation effects in Section 3 and the resulting robustness to label noise observed in Section 4. Specifically, the PLL student model is designed to replicate the benefits of multi-round self-distillation\\u2014particularly its robustness to label noise\\u2014in a single round.\\n\\nIn Section 3, we analyzed the effect of multi-round self-distillation and provided a quantified closed-form solution for the output of the $t$-th student model. This analysis revealed that multi-round self-distillation induces label averaging effects among instances with high feature correlations when generating predictions on the training data. This means that as the number of distillation rounds increases, the model's output predictions gradually form clusters corresponding to each ground-truth label based on input feature correlations. 
This clustering effect is illustrated in Figure 2, where the outputs become more similar for instances sharing the same true label as $t$ increases.\\n\\nIn Section 4, we explored how this clustering effect enhances robustness to label noise. The label averaging makes the model's predictions less overfitted to the given (possibly noisy) labels and more influenced by the average labels among highly correlated instances. Recognizing that each distillation round gradually increases the weight on the average labels while decreasing the weight on the individual labels, we analyzed how many distillation rounds are sufficient for the student model to make correct predictions for all training instances, including those with noisy labels. This is summarized in Theorem 4.1, which captures the relationship between label corruption rates and the effectiveness of the label averaging process over multiple distillation rounds. \\n\\nBuilding on these insights, the PLL student model proposed in Section 5 aims to replicate the behavior of multi-round self-distillation through refinement of the teacher's output and a single step of self-distillation. The label averaging effect from multi-round self-distillation shifts the softmax output away from a one-hot vector of the given label toward the average labels among instances sharing the same ground-truth label. For clean samples, this averaging effect slightly reduces the confidence in the correct label but maintains the correct prediction as $t$ increases. For noisy samples, it decreases the confidence in the incorrect (noisy) label and moves the prediction toward the correct label.\\n\\nTo facilitate this process efficiently, our PLL student model refines the teacher's output by selecting the top two labels with the highest values, assigning a weight of 1/2 to each, and setting all other entries to zero. 
Under mild conditions on the label corruption matrix\\u2014as stated in Theorem 5.1\\u2014we show that the top-two list obtained from the teacher's output always includes the ground-truth label: it ranks first for clean samples and second for noisy samples. Assigning equal weights to the top two labels balances the influence of the teacher's confidence and mitigates overconfidence in potentially incorrect labels due to noise.\\n\\nUsing these refined targets (two-hot vectors), a single round of self-distillation\\u2014through label averaging among highly correlated inputs\\u2014ensures that the model's softmax outputs assign the highest probability to the ground-truth label for all training instances, achieving 100% accuracy. This is possible because the average of the two-hot vectors from instances of the same ground-truth class has its highest value at the ground-truth label, since the two-hot vectors always include the true label for both clean and noisy samples.\\n\\nThus, the PLL student model effectively replicates the advantages of multi-round self-distillation in just a single round. This approach not only enhances computational efficiency but also maintains or improves performance, especially in high label-noise regimes.\"}",
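The two-hot refinement described in this response is simple to state in code. A minimal NumPy sketch (the function name and array shapes are our own illustration, not the paper's released implementation):

```python
import numpy as np

def refine_to_partial_labels(teacher_probs: np.ndarray) -> np.ndarray:
    """Build two-hot PLL targets from a teacher's softmax outputs.

    For each row, keep the top-2 classes with weight 1/2 each and
    set all other entries to zero.
    """
    n, _ = teacher_probs.shape
    targets = np.zeros_like(teacher_probs)
    top2 = np.argsort(teacher_probs, axis=1)[:, -2:]  # indices of the 2 largest entries
    rows = np.repeat(np.arange(n), 2)
    targets[rows, top2.ravel()] = 0.5
    return targets
```

Under the conditions of Theorem 5.1, the top-2 set contains the ground-truth label for both clean samples (where it ranks first) and noisy samples (where it ranks second), so these targets always place weight 1/2 on the true class.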
"{\"title\": \"Response to Reviewer UkvB (2/2)\", \"comment\": \">**Q2. Could you explain the intuition behind the condition (Eq. 10) in Theorem 4.1? At first glance, it seems almost implausible to reduce training losses to zero in the presence of label noise, even with multiple rounds of self-distillation.**\\n\\n(Eq. 10) in Theorem 4.1 provides a condition under which the $t$-th self-distilled model can achieve 100% population accuracy, even in the presence of label noise in the training dataset. The key intuition behind this condition lies in the label averaging effect of multi-round self-distillation.\\n\\nEach round of self-distillation causes the model's output predictions to gradually shift away from the one-hot encoded given labels\\u2014which may be noisy\\u2014toward the average label vectors of instances with high feature correlations (i.e., instances from the same ground-truth class). This process is described in (Eq. 9) of Theorem 3.1. Essentially, at each distillation round, the student model is trained to fit new targets that increasingly reflect the collective information from highly correlated instances, rather than fitting the noisy labels. As the number of distillation rounds $t$ increases, the influence of individual noisy labels diminishes, and the student model relies more on the averaged labels derived from other instances of the same class. This shift reduces the impact of label noise because the averaging process amplifies the true signal (correct labels) while diluting the noise (incorrect labels).\\n\\n(Eq. 10) specifies the sufficient number of distillation rounds $t$ needed for this label averaging effect to ensure that the model's softmax outputs assign the highest probability to the ground-truth label for all training instances\\u2014including those with noisy labels.\\nTo understand the condition in (Eq. 10) more intuitively, consider (Eq. 
13), which shows that the output prediction for a particular training instance can be expressed as a weighted combination of: 1) The given (possibly noisy) label, 2) The average label vector among instances from the same ground-truth class, 3) The average label vector among instances from the same superclass, 4) The uniform distribution vector.\\n\\nThe weights assigned to these components depend on the number of distillation rounds $t$ and shift away from the individual noisy label toward the average label as $t$ increases. As long as the proportion of correctly labeled instances in each class is higher than the proportion of mislabeled ones (i.e., $[C]\\\\_{k,k}>[C]\\\\_{k,k'}$ for all $k\\\\neq k'$), the average label vector will have its highest value at the ground-truth label position. Therefore, after sufficient distillation rounds, the model's predictions will correctly identify the ground-truth label, even for samples that were initially mislabeled.\\n\\nIn summary, the condition in (Eq. 10) captures the relationship between the label corruption rates and the effectiveness of the label averaging process over multiple distillation rounds. It provides the minimum number of rounds needed for the model to overcome label noise by leveraging the collective information from correlated instances, ultimately achieving perfect accuracy despite the presence of noisy labels. We will add this high-level intuition in the main text. \\n\\n>**Q3. It seems that the terms \\u201ctrue label\\u201d and \\u201cgiven label\\u201d are not properly defined. As I understand it, the \\u201cgiven label\\u201d refers to the target label, which may include noise, while the \\u201ctrue label\\u201d represents the oracle.**\\n\\nIn our paper, the true label $y(\\\\mathbf{x})$ refers to the ground-truth class of each sample $\\\\mathbf{x}$. This represents the actual class to which the sample inherently belongs, as determined by the underlying data distribution. 
The given label $\\\\hat{y}$ is the label provided during training. This is the target label used by the teacher model, and it may include label noise. We will revise our paper to clarify these terms.\"}",
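The argmax claim in the Q2 response above — that the class-wise average of the given labels peaks at the ground-truth position whenever correct labels outnumber each kind of incorrect label — can be checked numerically. A toy sketch with hypothetical corruption counts (none of these numbers come from the paper):

```python
import numpy as np

def average_given_label(counts):
    """Average the one-hot given labels within one ground-truth class.

    counts[k] = number of instances of this class whose *given* label is k.
    """
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

# Ground-truth class 0 with 100 instances: 60 labeled correctly,
# 25 mislabeled as class 1, 15 mislabeled as class 2 (made-up rates).
avg = average_given_label([60, 25, 15])
# Since 60 > 25 and 60 > 15, the averaged label still peaks at class 0,
# so predictions pulled toward this average recover the true label.
```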
"{\"summary\": \"This work aims to analyze self-distillation in linear probing with neural network feature extractors. The contributions of this work are threefold: (1) the analysis reveals that self-distillation effectively performs label averaging among instances with highly correlated features when generating predictions on the training data; (2) the analysis quantifies the number of distillation rounds needed to achieve 100% population accuracy in the presence of label noise; and (3) based on the theoretical analysis, the authors introduce a novel self-distillation approach that achievers similar benefits of multi-round distillation in a single round.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Solid Presentation\\nOverall, this manuscript is well-written and polished. The introduction provides a satisfying overview of the rich theoretical analysis, which is a strong aspect of the work. Some parts of the analysis are somewhat difficult to follow, but experts in this specific field likely won\\u2019t struggle to understand the details.\\n\\n2. Interesting theoretical analysis\\nOne of the main theoretical contributions of this paper is demonstrating that the effect of self-distillation can be interpreted as label averaging among highly correlated instances. This interpretation offers a new perspective on the mechanism of self-distillation, even in the absence of feature evolution. Initially, I thought this was merely a special case of Allen-Zhu and Li\\u2019s study. However, after reviewing previous work, I discovered a notable difference in how the effects of self-distillation are analyzed, which I believe is worth sharing with the community.\\n\\n3. 
A New Approach to Converting Multi-Round Self-Distillation into a Single Round\\nThe proposed method takes a different approach by directly using the refined partial labels derived from the teacher's outputs, achieving the same benefits as multi-round distillation in just one round. This is especially appealing, as one of the main drawbacks of self-distillation is the need for multiple rounds, which makes training very inefficient. A single-shot method is highly desirable, and we hope this approach will be widely adopted for training classifiers in noisy label settings.\", \"weaknesses\": \"1. Most sections are well-written, but some parts would benefit from clearer descriptions or revision. Please refer to the section below.\\n\\n2. The experiments are somewhat weak, but this is understandable since this work is more theoretical.\", \"questions\": \"1. The second contribution of this work is to determine the number of distillation rounds required to achieve 100% population accuracy in the presence of label noise. On first reading, the term \\\"100% population accuracy\\\" may not be immediately clear. Introducing a brief explanation of this concept in the first place would be beneficial for readers who may be less familiar with the subject.\\n\\n2. Could you explain the intuition behind the condition (Eq. 10) in Theorem 4.1? At first glance, it seems almost implausible to reduce training losses to zero in the presence of label noise, even with multiple rounds of self-distillation. \\n\\n3. It seems that the terms \\u201ctrue label\\u201d and \\u201cgiven label\\u201d are not properly defined. As I understand it, the \\u201cgiven label\\u201d refers to the target label, which may include noise, while the \\u201ctrue label\\u201d represents the oracle.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer UkvB (1/2)\", \"comment\": \"We sincerely thank the reviewer for the constructive feedback. Please see our responses to the reviewer\\u2019s questions.\\n\\n>**Q1. The second contribution of this work is to determine the number of distillation rounds required to achieve 100% population accuracy in the presence of label noise. On first reading, the term \\\"100% population accuracy\\\" may not be immediately clear. Introducing a brief explanation of this concept in the first place would be beneficial for readers who may be less familiar with the subject.**\\n\\nThank you for your feedback. We will clarify the term \\u201c100% population accuracy\\u201d where it first appears in the manuscript. In our problem setup, we consider a ground-truth distribution of input-label pairs $(\\\\mathbf{x},y(\\\\mathbf{x}))\\\\sim\\\\mathcal{P}$. The population accuracy of a classifier $\\\\boldsymbol{\\\\theta}\\\\in\\\\mathbb{R}^{d\\\\times K}$ is defined as $\\\\mathbb{E}\\\\_{(\\\\mathbf{x},y(\\\\mathbf{x})) \\\\sim\\\\mathcal{P}}\\\\left[\\\\mathbb{1}\\\\left(\\\\arg\\\\max\\\\_{k\\\\in[K]}[\\\\sigma(\\\\boldsymbol{\\\\theta}^{\\\\top} \\\\phi(\\\\mathbf{x}))]\\\\_k =y(\\\\mathbf{x})\\\\right)\\\\right],$ i.e., the probability that the classifier (the softmax output) correctly predicts the ground-truth label $y$ for an input $\\\\mathbf{x}$, drawn from $\\\\mathcal{P}$. \\n\\nAchieving 100% population accuracy means that the classifier correctly identifies the ground-truth class for all possible inputs from the distribution $\\\\mathcal{P}$. In other words, despite the presence of label noise in the training data, the classifier perfectly generalizes to the true underlying data distribution. 
Assuming a sufficiently large number $n$ of training instances per class, this indicates that the trained model can correctly classify the (potentially noisy) training instances into their true ground-truth classes rather than overfitting to the given noisy labels. We will incorporate this explanation into the manuscript to ensure clarity for all readers.\"}",
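The quoted definition can be estimated on a finite sample as follows (a minimal linear-probing sketch; since softmax is monotone, the argmax of the softmax output equals the argmax of the logits, so the softmax itself can be omitted):

```python
import numpy as np

def population_accuracy(theta, features, true_labels):
    """Empirical estimate of the population accuracy of classifier theta.

    theta: (d, K) linear probe; features: (n, d) fixed features phi(x);
    true_labels: (n,) ground-truth classes y(x).
    """
    logits = features @ theta               # theta^T phi(x) for each sample
    predictions = logits.argmax(axis=1)     # argmax of softmax == argmax of logits
    return float((predictions == true_labels).mean())
```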
"{\"title\": \"Response to Reviewer g4HH\", \"comment\": \"We sincerely thank the reviewer for the constructive feedback. Please see our responses to the reviewer\\u2019s questions.\\n\\n>**W1. In the experiments, Generalized Cross Entropy loss is applied for PLL which differs from the setting of CE loss in the main theory. The authors need to explain whether applying GCE loss is fair for comparison or provide experimental results of CE loss for PLL.**\\n\\nThank the reviewer for the insightful comment. We clarify our rationale for using the Generalized Cross Entropy (GCE) loss in our experiments with the PLL student model, and provide additional experimental results using the Cross Entropy (CE) loss below.\\n\\nThe GCE loss is designed to balance the trade-off between the robustness of the Mean Absolute Error (MAE) loss and the fast convergence of the CE loss. In a $K$-class classification problem, for a given data pair $(\\\\boldsymbol{x}\\\\_i, y\\\\_i)$, the CE loss $\\\\mathcal{L}\\\\_\\\\mathsf{CE}$ and the MAE loss $\\\\mathcal{L}\\\\_\\\\mathsf{MAE}$ are defined as \\n$$\\n\\\\mathcal{L}\\\\_\\\\mathsf{CE}(\\\\mathbf{e}(y\\\\_i), f(\\\\boldsymbol{x}\\\\_i;\\\\theta)) = -\\\\sum\\\\_{k=1}^K [\\\\mathbf{e}(y\\\\_i)]\\\\_k \\\\log[f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_k, \\\\quad \\\\mathcal{L}\\\\_\\\\mathsf{MAE}(\\\\mathbf{e}(y\\\\_i), f(\\\\boldsymbol{x}\\\\_i;\\\\theta)) = \\\\sum\\\\_{k=1}^K |[\\\\mathbf{e}(y_i)]\\\\_k - [f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_k|.\\n$$ The gradients for each loss are given as \\n$$\\n\\\\nabla\\\\_\\\\theta \\\\mathcal{L}\\\\_\\\\mathsf{CE}(\\\\mathbf{e}(y\\\\_i), f(\\\\boldsymbol{x}\\\\_i;\\\\theta)) = -\\\\frac{1}{[f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_{y\\\\_i}} \\\\nabla\\\\_\\\\theta [f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_{y\\\\_i}, \\\\quad \\\\nabla\\\\_\\\\theta \\\\mathcal{L}\\\\_\\\\mathsf{MAE}(\\\\mathbf{e}(y\\\\_i), f(\\\\boldsymbol{x}\\\\_i;\\\\theta)) = -2\\\\nabla\\\\_\\\\theta 
[f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_{y\\\\_i}.\\n$$CE loss tends to converge quickly but can overfit to noisy labels, while MAE loss is robust to label noise but converges slowly.\\n\\nThe GCE loss introduces a hyperparameter $q$ to interpolate between CE and MAE losses:\\n$$\\\\mathcal{L}\\\\_\\\\mathsf{GCE}(\\\\mathbf{e}(y\\\\_i), f(\\\\boldsymbol{x}\\\\_i;\\\\theta)) = \\\\frac{1 - ([f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_{y_i})^q}{q},\\n$$with gradient:\\n$$\\n\\\\nabla_\\\\theta \\\\mathcal{L}\\\\_\\\\mathsf{GCE}(\\\\mathbf{e}(y\\\\_i), f(\\\\boldsymbol{x}\\\\_i;\\\\theta)) = -[f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_{y\\\\_i}^{q-1} \\\\nabla\\\\_\\\\theta [f(\\\\boldsymbol{x}\\\\_i;\\\\theta)]\\\\_{y\\\\_i}.\\n$$By adjusting $q$, GCE loss balances robustness to label noise and convergence speed.\\n\\nIn our experiments, we observed that using CE loss with the PLL student model often led to instability during training. The PLL student model trains with a set of candidate labels for each sample with equal weights\\u2014in our case, the top two labels with weights of $1/2$ each. Using CE loss with equally weighted candidate labels can cause instability because the model may converge incorrectly when the candidate set includes incorrect labels.\\n\\nVarious PLL approaches address this instability by refining the candidate label set during training or adjusting label weights. However, our focus is to demonstrate the effectiveness of one-step self-distillation using PLL without incorporating additional PLL techniques. Therefore, we utilized the GCE loss, which behaves similarly to the CE loss but offers greater stability during training. \\n\\nWe also conducted additional experiments using CE loss with the PLL student model. 
The test accuracies on the CIFAR-100 dataset under varying label corruption rates ($\\\\eta$) are presented below (corresponding to Figure 4 and Table 5 in the main text):\\n|DistillationStep|0.0|0.1|0.3|0.5|0.6|0.7|0.8|0.9| \\n|----------------|----|----|----|----|----|----|----|----| \\n|1(Teacher)|70.6\\u00b10.1%|65.6\\u00b10.1%|59.6\\u00b10.2%|52.4\\u00b10.2%|46.8\\u00b10.1%|39.6\\u00b10.3%|28.3\\u00b10.3%|13.2\\u00b10.2%| \\n|2|71.7\\u00b10.2%|69.6\\u00b10.1%|66.0\\u00b10.2%|62.1\\u00b10.2%|58.5\\u00b10.6%|53.2\\u00b10.4%|43.1\\u00b10.4%|22.6\\u00b11.2%| \\n|3|**72.1\\u00b10.2%**|69.8\\u00b10.3%|66.3\\u00b10.2%|62.4\\u00b10.1%|58.6\\u00b10.6%|53.4\\u00b10.5%|43.6\\u00b10.3%|23.3\\u00b11.1%| \\n|4|72.0\\u00b10.3%|69.8\\u00b10.3%|66.5\\u00b10.2%|62.4\\u00b10.1%|58.7\\u00b10.6%|53.5\\u00b10.5%|43.7\\u00b10.3%|23.5\\u00b11.2%| \\n|5|72.0\\u00b10.3%|**69.9\\u00b10.3%**|66.5\\u00b10.2%|62.4\\u00b10.1%|58.7\\u00b10.5%|53.6\\u00b10.5%|43.9\\u00b10.3%|23.7\\u00b11.2%| \\n|PLL(GCE)|69.5\\u00b10.2%|67.9\\u00b10.1%|**66.9\\u00b10.2%**|**63.9\\u00b10.3%**|**61.3\\u00b10.4%**|**57.1\\u00b10.1%**|**48.6\\u00b10.9%**|**26.5\\u00b10.7%**| \\n|PLL(CE)|68.7/68.6/68.3%|66.3/66.4/**64.6**%|65.0/65.3/65.6%|62.1/62.8/62.3%|59.6/60.0/60.0%|55.8/55.6/55.0%|47.2/**42.4**/47.5%|**22.8**/25.2/25.8%| \\n\\nAcross three repeated experiments, the PLL student model still generally outperforms the multi-round self-distillation model in high corruption regimes ($\\\\eta\\\\geq 0.6$), even with CE loss. However, we observed greater variability and occasional drops in accuracy when using CE loss. GCE loss provided more consistent and stable performance across different corruption rates. We will include these explanations in the revised manuscript to clarify our experimental setup.\"}",
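For concreteness, the GCE objective written out above is a one-liner in NumPy (an illustrative sketch; the default $q$ below is an assumption, not necessarily the value used in our experiments):

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized Cross Entropy: mean of (1 - p_y^q) / q over the batch.

    As q -> 0 this recovers CE (-log p_y); q = 1 gives 1 - p_y, i.e.
    the MAE loss above up to a factor of 2.
    """
    p_y = probs[np.arange(len(labels)), labels]  # probability at the given label
    return float(np.mean((1.0 - p_y ** q) / q))
```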
"{\"title\": \"Response to Reviewer XuSH (2/2)\", \"comment\": \">**W2. The proposed theory assumes a fixed feature extractor and therefore cannot be applied to trainable feature extractors, which are more commonly used in task adaptation and transfer learning.**\\n\\nWhile our main analysis assumes a fixed feature extractor and focuses on training a linear classifier on top of it, our theory can be generalized to scenarios where the feature extractor is updated during self-distillation, as we explain in Appendix C. Our framework decouples the gains from self-distillation into two components: feature learning (which modifies the feature map assumed in Equation (7)) and feature selection (which occurs during training the classifier, i.e., linear probing).\\n\\nTo illustrate this point, consider the quantified gains of self-distillation in label-noise scenarios presented in Theorem 4.1. This theorem provides the sufficient number of distillation rounds required for the student's softmax output to assign the highest value to the ground-truth label for both clean and noisy samples. The condition depends not only on the class-wise label corruption rates but also on the relative gap in feature correlations between samples of the same ground-truth class, parameterized by $c$, and those of different ground-truth classes, parameterized by $d$.\\n\\nWe can generalize our theory by allowing these correlations to evolve over distillation rounds, denoted by $c^{(i)}$ and $d^{(i)}$, due to feature updates during self-distillation. Recent work by Allen-Zhu and Li (2022) demonstrates that self-distillation's effectiveness arises from an implicit ensemble of teacher and student models, enabling the student to learn more diverse features when using the teacher's softmax outputs as targets. 
This suggests that the intra-class feature correlation $c^{(i)}$ may increase with each distillation round $i$, enhancing the separation between classes.\\n\\nAssuming the class-wise feature map defined in Equation (7), and allowing $c$ and $d$ to change over distillation steps, our parameters $p$ and $q$ in Equation (14), which govern the label averaging effect, also become functions of $i$: $$\\np^{(i)}:=(1-c^{(i)})/(K^2 n\\\\lambda+1-c^{(i)}); \\\\quad q^{(i)}:=(1-c^{(i)}+n(c^{(i)}-d^{(i)}))/(K^2 n\\\\lambda+1-c^{(i)}+n(c^{(i)}-d^{(i)})).\\n$$ \\nUnder the extended class-wise feature correlation assumption, our Theorem 4.1 can be generalized to: \\n\\n**Theorem C.1 (extended version.)** *Under the evolving feature correlation model, the $t$-th distilled model achieves 100% population accuracy if*\\n$$[\\\\mathbf{C}]\\\\_{k, k}>[\\\\mathbf{C}]\\\\_{k, k'}+\\\\frac{1}{\\\\prod\\\\_{i=1}^t ({q^{(i)}}/{p^{(i)}})- 1},\\\\quad\\\\forall k,k'(\\\\neq k)\\\\in[K].$$\\n\\nIf the student model learns more diverse features than the teacher, resulting in an increase of $c^{(i)}$ over distillation rounds, the ratio $q^{(i)}/p^{(i)}$\\nalso increases. This makes it easier for the student model to meet the condition for achieving 100% population accuracy. Consequently, the regime where the student model achieves perfect accuracy expands, leading to performance gains from self-distillation.\\n\\nIn summary, while our primary focus was on understanding self-distillation's benefits with a fixed feature extractor, our analysis can be extended to incorporate feature learning dynamics. By integrating our findings with existing feature learning approaches, we demonstrate that self-distillation enhances performance through both label averaging and the evolution of feature representations. Our extended theory quantifies the gains from both aspects, as shown in Theorem C.1. This extended analysis is detailed in Appendix C of our manuscript.\\n\\n[1] Allen-Zhu, Z. and Li, Y. 
Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. In The Eleventh International Conference on Learning Representations, 2022.\"}",
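The margin on the right-hand side of Theorem C.1 follows directly from the $p^{(i)}$, $q^{(i)}$ formulas above. A sketch under the evolving-correlation assumption (the function name and the parameter values in the comments are hypothetical):

```python
def distillation_margin(c_seq, d_seq, n, K, lam):
    """Compute 1 / (prod_i q_i/p_i - 1), the slack required between
    [C]_{k,k} and [C]_{k,k'} after t = len(c_seq) distillation rounds.

    c_seq, d_seq: intra-/inter-class feature correlations per round;
    n: instances per class; K: number of classes; lam: regularization.
    """
    ratio = 1.0
    for c, d in zip(c_seq, d_seq):
        base = K ** 2 * n * lam
        p = (1 - c) / (base + 1 - c)
        q = (1 - c + n * (c - d)) / (base + 1 - c + n * (c - d))
        ratio *= q / p
    return 1.0 / (ratio - 1.0)
```

If the intra-class correlation $c^{(i)}$ grows over rounds, each factor $q^{(i)}/p^{(i)}$ grows as well, so the margin shrinks and the condition for 100% population accuracy becomes easier to satisfy.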
"{\"summary\": \"This paper presents a theoretical analysis of the mechanisms behind self-distillation in a linear probing setting. The analysis reveals that after $t$ rounds of self-distillation, the model's predictions converge to a weighted average of the provided labels, with the weights determined by the Gram matrix. Building on this finding, the authors investigate the effects of label noise and the efficiency of the self-distillation method. Experiments demonstrate the effectiveness of proposed single-round self-distillation method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written, particularly in Section 2, which formulates self-distillation and provides a clear overview of the results.\", \"This paper presents an interesting result in Theorem 2.1, establishing a connection between the predictions of the $t$-th distilled model and the given (possibly noisy) labels.\"], \"weaknesses\": [\"The main weakness about this paper is the significance of research problem.\", \"This paper focuses on self-distillation with linear probing and provides a theoretical analysis in this context. I believe that both self-distillation and linear probing are valuable techniques, but I am unclear about the purpose of combining self-distillation with linear probing. As far as I know, linear probing is widely used in self-supervised learning as a method to evaluate learned features. Why should we combine linear probing with self-distillation, especially in scenarios involving label noise?\", \"The proposed theory assumes a fixed feature extractor and therefore cannot be applied to trainable feature extractors, which are more commonly used in task adaptation and transfer learning.\"], \"questions\": \"See my questions in weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to Reviewer CZ26 (2/3)\", \"comment\": \"> **W3. The experiments use ResNet34 as the backbone for linear probe experiments; however, training ResNet34 with all parameters is not particularly time-consuming or resource-intensive. Should a backbone with a larger parameter count be chosen for the experiments?**\\n\\nThank the reviewer for the insightful suggestion. We acknowledge that ResNet34 is not a particularly large model, and training it with all parameters is not highly resource-intensive. However, we chose ResNet34 for our experiments to effectively observe the progressive gains of multi-round self-distillation.\\n\\nTo address the reviewer\\u2019s concern, we also conducted additional experiments using a larger backbone\\u2014a pretrained ViT-B (Vision Transformer) model\\u2014as a fixed feature extractor. The test accuracies (%) on the CIFAR-100 dataset across different label corruption rates are presented in the table below:\\n\\n|DistillationStep|0.0|0.1|0.3|0.5|0.6|0.7|0.8|0.9|\\n|----------------|----|----|----|----|----|----|----|----|\\n|1(Teacher)|79.21\\u00b10.02%|74.26\\u00b10.24%|67.14\\u00b10.06%|59.42\\u00b10.40%|54.32\\u00b10.34%|46.00\\u00b10.46%|34.55\\u00b10.28%|17.19\\u00b10.50%|\\n|2|79.99\\u00b10.10%|74.77\\u00b10.28%|72.95\\u00b13.06%|71.29\\u00b10.64%|68.94\\u00b10.49%|64.40\\u00b10.56%|55.53\\u00b10.98%|33.43\\u00b10.56%|\\n|3|80.24\\u00b10.12%|74.89\\u00b10.31%|73.28\\u00b13.18%|71.76\\u00b10.34%|69.22\\u00b10.42%|65.16\\u00b10.61%|56.88\\u00b11.17%|36.10\\u00b10.80%|\\n|4|80.28\\u00b10.13%|**74.91\\u00b10.28%**|73.34\\u00b13.17%|**71.87\\u00b10.41%**|69.32\\u00b10.40%|65.41\\u00b10.58%|57.26\\u00b11.29%|36.84\\u00b10.98%|\\n|5|**80.29\\u00b10.16%**|74.90\\u00b10.27%|**73.40\\u00b13.11%**|71.86\\u00b10.42%|69.37\\u00b10.41%|65.54\\u00b10.63%|57.44\\u00b11.27%|**37.22\\u00b11.11%**|\\n|PLL|77.48\\u00b10.13%|73.78\\u00b10.50%|72.69\\u00b12.82%|71.56\\u00b10.15%|**69.76\\u00b10.45%**|**66.36\\u00b10.29%**|**58.42\\u
00b10.58%**|36.04\\u00b11.46%|\\n\\nUsing a larger backbone like ViT-B enhances feature extraction capabilities, resulting in a greater disparity between intra-class and inter-class feature correlations. For example, when calculating feature correlations on the CIFAR-100 dataset using the pretrained ViT-B model, we observed that the average intra-class feature correlation increased to 0.35, compared to 0.25 with ResNet34. This higher intra-class correlation amplifies the clustering effect of self-distillation on model predictions, allowing significant performance improvements to be achieved with fewer distillation steps, as implied by our Theorem 4.1.\\n\\nIn experiments with the larger backbone (ViT-B), we found that most of the distillation gains occur within the first few rounds. As shown in the table above, nearly all performance improvements are observed in the first and second distillation steps when using ViT-B, although additional distillation steps still bring slight gains in high noise rate regimes. The PLL student model also effectively achieves the gains of multi-round self-distillation in a single round in high label-noise regimes for the ViT-B backbone. These additional experiments confirm that our approach is effective with larger models, further validating the versatility and robustness of our method.\"}",
"{\"title\": \"The revised paper is uploaded\", \"comment\": [\"We sincerely thank the reviewers for their constructive feedback. Based on their comments, we have revised our paper accordingly, marking the changes in blue. The modifications are as follows:\", \"**(Sec. 1 L77-78, Sec. 2 L246-250)**: Clarified the explanation regarding 100% population accuracy. (Reviewer UkvB, Q1)\", \"**(Sec. 2 L130-131, 134-135 / App.J.1 L1578, L1581, 1682 / App.K L1866, 1885, 1922)**: Clearly defined \\u201cground-truth label\\u201d and \\u201cgiven label\\u201d, and replaced \\\"provided label\\\" with \\\"given label\\\" throughout the manuscript. (Reviewer UkvB, Q3)\", \"**(App.C)**: Elaborated on extending our work to more general settings of self-distillation, involving trainable feature extractors during multi-round self-distillations. (Reviewer XuSH, W2)\", \"**(App.E.3)**: Provided a more detailed explanation for the use of the GCE loss in our experiments with the PLL student model. (Reviewer g4HH, W1)\", \"**(App.H.2)**: Added experiments using a larger feature extractor backbone (ViT-B) to the ablation section. (Reviewer CZ26, W3)\", \"We have also provided detailed responses to all the reviewers' questions. We hope that these responses address the reviewer\\u2019s concerns and questions.\"]}",
"{\"summary\": \"This paper examines the multi-round self-distillation for multi-class classification in the context of linear probing. By approximating the softmax function as a linear function and considering the feature correlation matrix as a low-rank structure, this paper derives a quantified closed-form solution for the output of the $t$-th student model, showing the effect of self-distillation can be interpreted as label averaging among highly correlated instances. The authors then derive the conditions of the label corruption matrix achieving 100% population accuracy for multi-round self-distillation, in the context of balance and superclass corruption. The authors further prove that distilling teacher's top-2 outputs enjoys better theoretical properties. Extensive experiments are conducted and show consistency with the proposed theories.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed theory is solid and valuable, especially providing insight into the effectiveness of distilling with partial labels.\", \"The authors provide numerical and visual analysis of the approximation of the softmax function and the feature correlation matrix, which is reasonable and convincing.\", \"This paper is well-organized and clearly written.\"], \"weaknesses\": \"In the experiments, Generalized Cross Entropy loss is applied for PLL which differs from the setting of CE loss in the main theory. The authors need to explain whether applying GCE loss is fair for comparison or provide experimental results of CE loss for PLL.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer CZ26 (3/3)\", \"comment\": \">**W4. When comparing existing PLL methods, the authors claim, \\\"Our method differs by directly employing the refined partial labels derived from the teacher\\u2019s outputs, achieving the same benefits as multi-round distillation in just one round.\\\" Does this imply that other methods incorporating the teacher's output could achieve similar or even better results? Are there corresponding experiments to support this?**\\n\\nPartial Label Learning (PLL) is a type of weakly supervised learning where training instances are annotated with a set of candidate labels rather than a single ground-truth label. The goal of PLL is to train a model capable of predicting the ground-truth label for unseen data using this partially labeled dataset. Some existing PLL methods incorporate a teacher-student framework, leveraging the teacher model to refine or guide the candidate label distribution to improve the student model's ability to generalize [1, 2, 3, 4, 5, 6, 7]. For instance, the teacher model may provide soft pseudo-labels or confidence scores to weight the candidate labels, enabling the student model to focus more on the most plausible label. This interaction helps mitigate ambiguity in candidate sets by progressively narrowing down the label distribution during training, ultimately enhancing the accuracy and robustness of the PLL framework. \\n\\nOur method differs from these existing PLL approaches in a key aspect. While traditional PLL studies focus on utilizing the given partially labeled dataset to train a model that predicts the ground-truth label precisely, our research emphasizes **constructing the candidate label set itself from the teacher\\u2019s output**, particularly in the context of noisy supervised training. 
We demonstrate that a noisy supervised training problem can be reformulated as a PLL problem by leveraging a pretrained feature extractor and linear probing.\\n\\nIn our experiments (as shown in Figure 7 in Appendix D.2), we observe that features extracted by a pretrained ResNet34 exhibit high feature correlation among instances of the same class and low feature correlation between instances of different classes. In Section 3, we show that the principle behind self-distillation lies in prediction averaging based on feature correlation. Thus, even for noisy instances\\u2014where the ground-truth label differs from the given label\\u2014the outputs are influenced by the average predictions of other instances within the same ground-truth class. This effect causes the output probability at the ground-truth label position to become larger, significantly increasing the likelihood that the ground-truth label is included in the size-2 candidate sets derived from the teacher's outputs.\\n\\nBy directly employing the refined partial labels from the teacher's outputs, our method effectively achieves the benefits of multi-round self-distillation in just one round. This approach is specifically designed to correct label noise by constructing candidate label sets that are more likely to include the true label, without relying on additional PLL techniques that adjust the candidate labels during training.\\n\\nRegarding the reviewer\\u2019s question, while other PLL methods incorporating the teacher's output aim to improve performance by refining the candidate label distribution, they typically focus on disambiguating given candidate labels rather than constructing the candidate set itself from the teacher's predictions. 
Our approach is unique in that it leverages the teacher's outputs to build the candidate label sets, directly addressing the issue of label noise in supervised learning.\\n\\nWe did not conduct experiments comparing our method with other PLL approaches that utilize the teacher's output because the objectives and problem settings differ. Our research contributes a novel perspective by demonstrating how to construct candidate label sets from the teacher's outputs to enhance robustness to label noise in linear probing with self-distillation.\\n\\n[1] Xia, Shiyu, et al. \\\"Towards effective visual representations for partial-label learning.\\\" *CVPR 2023*. \\n\\n[2] Li, Beibei, et al. \\\"AsyCo: An Asymmetric Dual-task Co-training Model for Partial-label Learning.\\\" *arXiv preprint, 2024*. \\n\\n[3] Xu, Ning, et al. \\\"Aligned Objective for Soft-Pseudo-Label Generation in Supervised Learning.\\\" *ICML 2024*. \\n\\n[4] Wang, Guangtai, et al. \\\"Dealing with partial labels by knowledge distillation.\\\" *Pattern Recognition, 2025*. \\n\\n[5] Wang, Haobo, et al. \\\"Pico: Contrastive label disambiguation for partial label learning.\\\" *ICLR 2022*. \\n\\n[6] Wu, Dong-Dong, Deng-Bao Wang, and Min-Ling Zhang. \\\"Distilling Reliable Knowledge for Instance-Dependent Partial Label Learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 14. 2024.\\n\\n[7] Li, Wei, et al. \\\"Generalized Contrastive Partial Label Learning for Cross-Subject EEG-Based Emotion Recognition.\\\" *IEEE Transactions on Instrumentation and Measurement, 2024*.\"}",
"{\"comment\": \"Thank you for your detailed response. My original concern has been resolved. I believe this paper meets the acceptance standards of ICLR.\"}"
]
} |
EJTeOf8iG0 | EEEC: Emotion-Experiencer-Event-Cause multi-step chain reasoning for Emotion-Cause Pair Extraction | [
"Xue Gu",
"Ziyao Meng",
"Tiago Gomes",
"Adriano Jose Tavares",
"Hao Xu"
] | Emotion-cause pair extraction (ECPE) aims to identify all emotion and cause clauses in documents, forming the ECPs. Although existing methods have achieved some success, they face issues such as overlooking the impact of emotion experiencers, failing to leverage specific domain knowledge, and tending to spurious correlations. To address these issues, we transform the ECPE task into a multi-step reasoning problem and propose the Emotion-Experience-Event-Cause (EEEC) framework. We introduce an experiencer identification task to understand the source of emotions and enhance the association between emotion and cause clauses. In addition, by combining both prior knowledge and induced reasoning, EEEC guides a large-scale language model (LLM) to perform the emotion-reason pair extraction task efficiently. Experimental results demonstrate that EEEC achieves performance close to current state-of-the-art supervised fine-tuning methods. The data and code are released at https://anonymous.4open.science/r/EEEC-EB80/. | [
"Experiencer; Event; Multi-step chain reasoning; Emotion-Cause Pair Extraction"
] | https://openreview.net/pdf?id=EJTeOf8iG0 | https://openreview.net/forum?id=EJTeOf8iG0 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"OKB1Dt39Cq",
"NRzPlFWKMG",
"HStjLwPLWe",
"A5WIWdTJxh",
"8hnj1lF4mm"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1729152856772,
1730695460262,
1731611936351,
1730651992683,
1731122750031
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10933/Reviewer_483Q"
],
[
"ICLR.cc/2025/Conference/Submission10933/Reviewer_raHV"
],
[
"ICLR.cc/2025/Conference/Submission10933/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10933/Reviewer_eXEt"
],
[
"ICLR.cc/2025/Conference/Submission10933/Reviewer_21YB"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes the Emotion-Experience-Event-Cause (EEEC) framework for the emotion-cause pair extraction (ECPE) task. Specifically, the EEEC framework includes three steps: Knowledge-guided Emotion Extraction, Experiencer & Event Extraction, and Cause Extraction. Experimental results show its effectiveness under the zero-shot setting.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The overall framework is clear.\\n2. The experimental results are good.\", \"weaknesses\": \"1. The presentation of some details is poor. (1) Please consider using the same documents in Figures 1 and 2 to aid understanding. (2) I suggest polishing Experiencer & Event Extraction and Cause Extraction in Figure 2. (3) KC in algorithm 1 seems to be key words set.\\n2. This paper uses a different base compared to GPT3.5 prompt and GPT3.5 DECC, which may affect the evaluation.\\n3. Where is w/o step1 in Table 3? Will directly extracting emotion clauses using LLMs result in poor performance in extracting emotion clauses?\\n4. The novelty and transferability of this paper appear to be average. For example, experiencer and event extraction, and analysis and validate.\", \"questions\": \"1. On which dataset are the results in Table 2? Does this indicate that the proposed method performs better on many pairs of documents and worse on a single pair?\\n2. Why does the proposed method perform better on the English dataset than the Chinese dataset compared to other baselines?\\n3. Please consider changing the form of the reference in Section baselines.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"## Summary:\\nThis paper introduces a multi-step reasoning approach for the problem of Emotion Cause Pair Extraction (ECPE). It breaks the problem down into a five step process as part of the proposed Emotion-Experience-Event-Cause (EEEC) framework: 1/ identifying emotion clauses guided by word level scores passed to the LLM, 2/ extracting experiencers for these emotions, 3/ extracting events to provide the right context 4/ clause extraction using step-by-step LLM chain-of-thought and reflection where the LLM itself validates the final results. \\n\\nFor 1, the paper uses a rule-based sentiment polarity analysis method that gives sentiment scores to each word. These word level scores are aggregated per sentence for the final emotion scores. These are passed to the LLM in Step 1, in addition to keyword detection and scoring. For 2 and 3, they prompt the LLM to detect the spans in the text that are experiencers and events respectively for the emotion clauses found in step 1. Next, in step 4, the LLM is asked to use everything it has found so far and analyze each clause for being a valid cause. Finally, in step 5, the LLM is asked to reflect on its own result so far and validate if it is coherent and self-consistent. \\n\\nThe paper presents a comparison of the approach with a long list of other models on three datasets, one of which is notably rebalanced to avoid overindexing on the positional closeness between emotion and cause. Additionally, it also presents an ablation study that ranks the value of each of the 5 steps above. EEEC improves over the SoTA prominently in the zero-shot setting and when there are more than one pairs of ECs in the documents. 
\\n\\n## Overall Recommendation:\\nThe paper, as it stands here, should be rejected because (1) the soundness of the paper in how it does step 1, how it compares to other approaches, how it reindexes the test set and why it doesn't report numbers of other comprehensive competitors on the reindexed benchmark is unclear making the final outcomes less strong (2) the writing is unnecessarily complex and has many unsubstantiated claims (3) reproducibility is unclear without a clear description of prompts (4) some methods like word level sentiment score aggregation don't have simple and most intuitive baselines such as sentence level sentiment analysis.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper approaches the ECPE problem with a deep understanding of the nuances of the task which enables the authors to use various task specific heuristics for solving subproblems.\\n1/ Originality: The paper's main novelty comes through as the combination of the 5 steps and chaining them in LLMs though each of them individually has been explored in the past. The paper presents a de-biased dataset to overcome the bias towards the positional closeness of an emotion and its cause which is a dataset artifact.\\n2/ Significance: The results indicate that the approach especially stands out when dealing with multiple pairs of ECs in the task and similarly in the zero-shot setting.\", \"weaknesses\": \"## Methodology\\n1. The method for sentiment based word level score aggregation is not sufficiently motivated. Specifically, (a) not normalizing the score penalizes short sentences, (b) sentences with negations may not be handled since scores are word level without using sentence level context and semantics (c) there is no comparison of this rule-based word level scoring with an LLM's inherent ability to assess sentiment. 
For example, a simple comparison would be to instruct the LLM to give sentence level scores/ranks based on overall sentiment as an explicit step and use that instead of this word level scoring. \\n2. It is unclear how L367-369 would work. If the emotion clauses finally generated by the LLM are not verbatim but still marked correct, the task as defined in 3.2 does not hold. \\n3. Table 1 shows the more \\\"trustworthy\\\" dataset titled 'Rebalanced CN dataset' but many comparison models don't have that filled in. The other columns seem self-reported? \\n\\n\\n## Unsubstantiated claims:\\n1. L83-84 claim that using any of the extra information requires large amounts of labelled training data. This is not true, since you could use any of this information by weak labeling using one of these models or prompting the LLM to specifically consider these factors and even make these scores explicit in a step-by-step approach. \\n2. L95-96 is a claim that document level processing will \\\"inevitably consider redundant information\\\". It is unclear what this is based on, specially when LLMs today could be prompted to find relevant information first. \\n3. L429-431: When comparing to DECC, the paper says that the improvement comes from Step 1. How that isolated conclusion is made is unclear. \\n\\n## Clarity \\n1. Prompts are not clearly stated per step, except for Figure 1 (which is a block diagram making the exact prompts hard to infer). \\n2. Related work and Table 1 have a large list of other models to compare against, but based on Sec 4.3.2 the paper makes a case to evaluate only on their rebalanced dataset. For this, Table 1 does not have results for many of the comparison models. Also, related work talks of a lot of models but does not tease apart the specific differences between theme, draw upon themes across them or draw out the novelty of EEEC over them. \\n3. L108-109 shows a 5 step process. Then L208-209 talks about 3 key phases. 
Finally, in section 3.4-3.6 it is again broken down to a 5 step process. Consistency in this will help the readability overall.\\n4. L339-349 talk about the analyze and validate steps. Based on Figure 1, it seems like this basically means asking the LLM to analyze all inputs and propose an answer. And next, validation seems like an LLM reflection process to confirm final answers. Clarify the working of these steps in the actual phase descriptions.\", \"questions\": \"1. Why do you consider it a novelty of this paper to give importance to the experiencer? Per your own literature survey too, in L160-161, Lee et al have already proposed experiencer based modeling since 2023. Do you mean that the combination of experiencer with the other components of your pipeline are unique?\\n2. In step 1, section 3.4.1, L256 onwards, the sentiment score is aggregated over all words without normalizing, is that accurate? If so, wouldn't the longer sentences game the threshold more easily and always be filtered in? What is the reason to not normalize the scores? \\n3. Per Figure 1, you give the scores to the LLM and ask it to filter the clauses for a threshold? Why is this an LLM call and not a deterministic code piece to filter clauses based on the threshold? Also, how is the threshold of 5.0 decided? \\n4. Figure 1 shows some prompt structure in english while appendix has others in just Chinese. Can you please add detailed and specific English prompts including how the sentiment scores are passed to the LLM.\\n5. What are the implications of LLM hallucination where the clause changes as shown in L366-369? If a clause is not extracted verbatim, why should it be considered correct? Doesn't that break the task definition itself per Section 3.2?\\n6. L257-259: Is this observation based on this current benchmarking data? If so, that would be contamination. \\n7. 
Why is emotion classification done with experiencer extraction and not with emotion filtering in step1?\", \"flag_for_ethics_review\": \"['Yes, Other reasons (please specify below)']\", \"details_of_ethics_concerns\": \"In Figure 1, the paper would benefit from some ethical considerations regarding stereotypical gender roles and objectifying language, such as the use of \\\"sexy\\\" for the wife in the example which is discussing relationship details in a couple. This terminology may risk reinforcing harmful stereotypes and promotes reductive views of gender. Such language can diminish the perceived complexity of emotions in both genders.\\n\\nI believe this is exaggerated since this is not a native English example and some of the tone may be coming in from the translation. I do not believe this was intentional from the authors.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"The paper introduces the Emotion-Experiencer-Event-Cause (EEEC) framework, which improves the accuracy of finding emotion-cause pairs in text. Emotion-Cause Pair Extraction (ECPE) involves identifying emotions and their causes in text, useful for sentiment analysis. Traditional methods have issues with bias and lack of specialized knowledge. EEEC tackles this by breaking down the task into steps, with a focus on identifying \\u201cexperiencers\\u201d (who feels the emotion) to better pinpoint causes. Using large language models like GPT-4, the framework shows improved performance in both English and Chinese without additional training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Strengths:\\n1. The paper is well-organized and easy to read.\\n2. By using LLMs, EEEC achieves promising results without fine-tuning, which suggests adaptability across languages and datasets.\", \"weaknesses\": \"Weaknesses:\\n1.\\t The framework\\u2019s reliance on LLMs may limit its application in scenarios where computational resources are restricted.\\n2.\\tThe multi-step approach, while effective, introduces potential challenges in managing and calibrating each sub-task, which could complicate implementation in practical scenarios.\\n3.\\tErrors in initial steps, especially emotion clause identification, can propagate through subsequent stages, impacting final accuracy. Although the authors attempted to mitigate the error propagation, the multi-step method may inevitably suffer from this issue, especially in processing hard samples.\\n4.\\tEEEC may struggle with implicit emotion expressions or subtle emotional nuances, as noted in some case studies.\\n5.\\tAlthough zero-shot results are promising, performance still lags behind fine-tuned models on specific datasets, suggesting that supervised approaches retain an edge in precision.\\n6. 
The presentation of table 1 and the introduction of baselines are not well-organized. There is no need to include numerous baselines (SOTA and representative methods are enough) )as you cannot analyze all the baselines. \\n7. The format of tables and figures are not consistent across the paper, e.g., table 1.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces the Emotion-Experiencer-Event-Cause (EEEC) framework, a multi-step reasoning approach for emotion-cause pair extraction (ECPE). By incorporating experiencer identification, prior sentiment knowledge, and logical association between emotion and cause clauses, EEEC aims to improve the accuracy and robustness of extracting emotion-cause pairs, particularly in zero-shot scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Integrates domain-specific knowledge and experiencer identification for accurate emotion-cause extraction.\\n2. Achieves strong zero-shot performance, surpassing some supervised methods.\", \"weaknesses\": \"1. The proposed methods rely heavily on manually designed rule, which limit their effectiveness in emotion-cause pair extraction, a nuanced area of research. The learning methods section lacks inspiration, and overall, it is not very engaging.\\n2. It is generally known that the meanings of words change according to the context. Therefore, word-level sentiment domain knowledge is not well-suited for emotion-cause pair reasoning tasks, which require a deep understanding of context.\\n3. The paper focuses only on cause clause detection, without explaining the rationale behind emotion-cause pair extraction, which is more intuitive for human understanding.\", \"questions\": \"In Figure 3, the authors only highlighted the words in yellow in the Chinese text, overlooking the English part, which impacts user comprehension.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EIwGR0w8VG | Scalable Approximate Message Passing for Bayesian Neural Networks | [
"Romeo Sommerfeld",
"Christian Helms",
"Ralf Herbrich"
] | Bayesian neural networks (BNNs) offer the potential for reliable uncertainty quantification and interpretability, which are critical for trustworthy AI in high-stakes domains. However, existing methods often struggle with issues such as overconfidence, hyperparameter sensitivity, and posterior collapse, leaving room for alternative approaches. In this work, we advance message passing (MP) for BNNs and present a novel framework that models the predictive posterior as a factor graph. To the best of our knowledge, our framework is the first MP method that handles convolutional neural networks and avoids double-counting training data, a limitation of previous MP methods that causes overconfidence. We evaluate our approach on CIFAR-10 with a convolutional neural network of roughly 890k parameters and find that it can compete with the SOTA baselines AdamW and IVON, even having an edge in terms of calibration. On synthetic data, we validate the uncertainty estimates and observe a strong correlation (0.9) between posterior credible intervals and its probability of covering the true data-generating function outside the training range. While our method scales to an MLP with 5.6 million parameters, further improvements are necessary to match the scale and performance of state-of-the-art variational inference methods. | [
"Message Passing",
"Bayesian Neural Networks",
"Uncertainty Estimation",
"Factor Graphs"
] | Reject | https://openreview.net/pdf?id=EIwGR0w8VG | https://openreview.net/forum?id=EIwGR0w8VG | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yA8noKt56A",
"w6CQZkdXkZ",
"udgb9qOZNi",
"tOjHg6UhBQ",
"jjnMyLljkT",
"Z3yvSOaH6Z",
"RL6XBTjP91",
"QlswcpY1hI",
"Nj69OORjl8",
"FOcoSyQd4y",
"3imR0iIRMX"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"decision"
],
"note_created": [
1732312718489,
1732314616845,
1730009682583,
1730281469617,
1732051193193,
1732314218936,
1734680138914,
1732784889734,
1730609649872,
1732785077696,
1737523830742
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7302/Reviewer_iRHR"
],
[
"ICLR.cc/2025/Conference/Submission7302/Reviewer_iRHR"
],
[
"ICLR.cc/2025/Conference/Submission7302/Reviewer_yEQC"
],
[
"ICLR.cc/2025/Conference/Submission7302/Reviewer_mv2H"
],
[
"ICLR.cc/2025/Conference/Submission7302/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7302/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7302/Area_Chair_JxEN"
],
[
"ICLR.cc/2025/Conference/Submission7302/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7302/Reviewer_iRHR"
],
[
"ICLR.cc/2025/Conference/Submission7302/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"comment\": \"Hello authors and thank you for your reply. I have read your reply, and it looks like your method is indeed not scalable. That being said, I think that the paper is not very strong in its current form. Since the method is not scalable to large datasets, it should be focused more on theoretical advances such as some kind of proof of convergence.\"}",
"{\"comment\": \"Yes I am well aware of that and from the current literature. Message passing algorithms are not very famous or \\\"trending\\\" lately, so it's good to see some kind of alternative approaches to BNNs. My main concern is still that the method is not very scalable. For example, most new works should have something involving at least a ResNet-18 if they are experimental papers. And since your work is not scalable for that, I would highly suggest to submit it with more theoretical findings.\"}",
"{\"summary\": \"This paper advances message passing (MP) methods for Bayesian neural networks by introducing a novel framework that models the predictive posterior as a factor graph. The key technical contribution is being the first MP method to handle convolutional neural networks while avoiding double-counting of training data, which was a key limitation of previous approaches that led to overconfidence. Their method shows particularly strong performance in data-constrained settings, achieving 94.62% accuracy on MNIST with LeNet-5 using only 640 samples (compared to 22.15% for SGD), while also demonstrating better calibration and out-of-distribution detection than standard approaches. While the method scales to networks with 5.6 million parameters and requires minimal hyperparameter tuning, it currently has higher computational overhead compared to standard training methods and has not yet matched the scale of state-of-the-art variational inference approaches. Nevertheless, the work represents an important step forward in providing more balanced uncertainty estimates, especially in scenarios with limited training data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"First message passing method to handle convolutional neural networks while solving the data double-counting problem\", \"Exceptional performance with limited data (94.62% MNIST accuracy with 640 samples vs 22.15% for SGD)\", \"Better calibration and out-of-distribution detection than standard approaches\", \"Scales to practical network sizes (5.6M parameters) with minimal hyperparameter tuning\", \"Clear theoretical framework with thorough derivations and implementation details, making it reproducible\"], \"weaknesses\": [\"Significantly slower than standard training methods (96.4s vs 2.3s on GPU for LeNet-5 training)\", \"Does not yet match the scale of state-of-the-art variational inference methods\", \"Limited empirical evaluation - only tested on MNIST 
and synthetic data, lacking results on more complex datasets\", \"Memory intensive during training, and requiring approximately twice the memory of standard approaches during inference.\", \"No comparison against other Bayesian methods like variational inference or MCMC in terms of uncertainty quality\", \"Limited discussion of how the approach handles different neural network architectures beyond MLPs and basic CNNs\", \"Does not scale to large neural networks beyond 5.6M parameters.\"], \"questions\": [\"Can you provide empirical results on more complex datasets beyond MNIST (e.g., CIFAR-10, ImageNet) to better demonstrate practical applicability and scalability?\", \"How does your method compare to modern variational inference approaches (like VOGN or IVON) in terms of uncertainty quality, calibration, and computational costs?\", \"What are the key bottlenecks causing the 40x slower training time compared to SGD, and are there potential optimizations to reduce this gap?\", \"How does your method handle modern neural network architectures with skip connections, batch normalization, or attention mechanisms - is the factor graph framework adaptable to these components?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a scalable message-passing (MP) framework for Bayesian neural networks (BNNs), aiming to improve uncertainty quantification in deep learning models. BNNs are known for their potential in high-stakes domains due to their ability to capture predictive uncertainty. Traditional methods, like variational inference (VI), struggle with overconfidence and hyperparameter sensitivity, prompting the development of this MP framework. The authors\\u2019 approach utilizes factor graphs to model the predictive posterior, demonstrating that this avoids common issues like double-counting training data, which leads to overconfidence in other MP methods. Their implementation shows better calibration and out-of-distribution detection than standard SGD, particularly in data-constrained settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The MP approach achieves superior calibration and out-of-distribution detection, crucial for applications in high-stakes domains where understanding model uncertainty is vital.\", \"The method shows competitive accuracy, especially in scenarios with limited data, outperforming standard approaches like SGD on tasks with restricted data availability.\", \"This framework is the first to apply message passing effectively to convolutional neural networks, marking a significant advancement over previous MP implementations. It opens up a relatively unexplored branch which is very important for the advancement of the field.\"], \"weaknesses\": [\"The MP method is computationally demanding, resulting in slower training times compared to standard approaches such as SGD and VI. 
The method needs substantial memory, especially during training, making it challenging to apply to memory-intensive tasks or very large networks without further optimization.\", \"Although it scales better than prior MP methods, the approach still lags behind VI in handling large, complex models and datasets.\", \"The method is based on some assumptions and oversimplification, such as the latent variables have a Gaussian distribution. This might not hold in complex scenarios with multimodal patterns.\", \"There is no proper time measurement of the method to have a crisp understanding of the computation complexity.\"], \"questions\": [\"Would it be feasible to apply this approach to a larger network or a more challenging dataset?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Evaluation on CIFAR-10 (preliminary)\", \"comment\": \"Thank you for your review!\\n\\nWe now have trained a convnet with our approach, using roughly 900k parameters on CIFAR-10, and achieved over 77% validation accuracy after 25 epochs (with no signs of saturation). To provide perspective, we trained the same model using Torch with the Adam optimizer (with default parameters\\u2014which, after a hyperparameter search, turned out to be quite optimal) and achieved a similar validation accuracy of around 75% after 25 epochs. Of course, the Torch library is highly optimized, and our method inherently has an overhead compared to training a deterministic network. So when giving the two methods the same compute budget, Torch + Adam would almost certainly achieve better accuracy. We do not claim otherwise, and that is not the point. Rather, we want to demonstrate that it is possible to train convolutional BNNs with a gradient-free message-passing algorithm while avoiding the double counting problem.\\n\\nThe reason we did not do this prior to submission was that, in the implementation state of the main branch of the repository, the GPU memory footprint exceeded 80\\u202fGB when training such a model on CIFAR-10. Prior to submission, we did not have time to implement our ideas on how to reduce the GPU memory footprint. Now we have implemented one of our ideas\\u2014namely, frozen batch buffering. Instead of storing the weight aggregates of inactive batches, which we dub frozen batches, entirely on the GPU, we now store them in main memory and buffer them into GPU memory once they are needed. 
This made it possible to train the above-mentioned model on a single GPU.\\n\\nIn the revised version of the paper, we plan to:\\na) provide a more thorough evaluation on CIFAR-10;\\nb) explain in detail how the memory requirements scale with parameters such as batch and model size; and\\nc) give ideas for future work on how to decrease training time and memory consumption.\\nProbably, this will not leave us any space to squeeze in proofs from the appendix into the main text.\\n\\nAdditionally, we would appreciate your feedback on which areas of the paper are particularly difficult to follow. This will help us focus our efforts.\", \"related_concern\": \"Considering that state-of-the-art models now have literally trillions of parameters and are trained on petabytes of data, do you think the term \\\"scalable\\\" in the paper's title is fitting with modern standards?\"}",
"{\"comment\": \"Thanks for your reply!\", \"one_point_we_would_like_to_stress\": \"the difference in runtime on GPU of our approach compared to a non-Bayesian, optimized PyTorch implementation is only a factor of 6 (and PyTorch has been highly optimized over the past few years), while the message passing algorithm for Bayesian neural networks offers the advantage of calibrated model uncertainties. This permits us both to learn highly accurate predictive models on a subset of training data *and* to increase the accuracy in prediction much faster when rejecting test examples based on these calibrated probabilities.\\n\\nWe would also like to note that our submission is one of the first - maybe *the* first - that demonstrates how to scale the training of calibrated Bayesian Neural networks with millions of weights and trained on tens of thousands of training examples. This is an algorithmic advancement over variational inference-based learning of Bayesian neural networks (which also overfit more than our inference technique).\"}",
"{\"metareview\": \"The paper develops approximate message passing as an approximate inference technique for Bayesian neural networks. The procedure is fairly involved and requires representing the neural network as a factor graph. Then, the authors perform loopy belief propagation with approximations to the message passing steps. The final result is a diagonal Gaussian approximation to the posterior. The authors show that the method is competitive with variational inference-based methods on a synthetic dataset with an MLP and on CIFAR-10 with a small convolutional model.\", \"strengths\": [\"The main strength of the paper is in developing novel approximate message passing methodology applicable to (small) neural network models.\", \"The authors develop and describe a number of tricks and approximations to get the method to run and address numerical instabilities.\", \"The method is competitive with variational inference in small-scale experiments.\"], \"weaknesses\": [\"The proposed method is involved, and it is not clear what the advantages are over existing Bayesian methods for neural networks, even variational inference.\", \"The authors do not do a careful literature review of existing Bayesian methods and do not compare to relevant baselines.\", \"Stochastic gradient Monte Carlo methods are popular, scalable, and avoid many limitations of VI [1, 2].\", \"For the scale of experiments in the paper, full HMC is easily applicable [3].\", \"Deep ensembles should be treated as a relevant baseline for uncertainty calibration [4].\", \"In the rebuttal, the authors say:\", \"> We would also like to note that our submission is one of the first - maybe the first - that demonstrates how to scale the training of calibrated Bayesian Neural networks with millions of weights and trained on tens of thousands of training examples.\", \"SWAG [5] proposed in 2019 was run on ImageNet (1M+ datapoints, 1k classes) with ResNet-152 models.\", \"Other scalable Bayesian methods include 
Laplace approximation-based methods, MC-dropout, etc.\", \"The empirical evaluation is not sufficient for confirming that the proposed method provides any advantages over baselines.\", \"There are only two experiments: synthetic dataset and CIFAR-10. For CIFAR-10, the accuracy is 78% for the best method. The state-of-the-art accuracy on CIFAR-10 is 99+% for pretrained models and 97+% for models trained from scratch. While the authors use a small model, 78% is so far from the state-of-the-art that this result cannot be used to argue for the method providing a practical improvement.\", \"Moreover, in the experiments, the proposed MP method loses to the AdamW baseline on accuracy.\", \"While the method provides improved calibration, I don't believe that is the correct target for Bayesian methods. Calibration is a highly-sensitive metric that can be improved with simple interventions [6]. I believe Bayesian methods should aim to improve accuracy, or provide some other benefit over the baselines.\", \"The method is computationally expensive compared to baselines, at least in the current implementation. The current implementation also does not support normalization layers and residual connections.\"], \"decision_recommendation\": \"The paper makes a meaningful contribution in developing an approximate message passing algorithm for Bayesian neural network. In its current form, it is a proof of concept, as the method is not applicable to modern models and does not provide significant improvements in the experiments. The empirical evaluation is very limited and missing relevant baselines, but even compared to the presented baselines the method does not make a significant improvement. Thus, I recommend rejecting the paper in its current form, and encourage the authors to improve the empirical evaluation.\\n\\n[1] Bayesian Learning via Stochastic Gradient Langevin Dynamics; Max Welling, Yee Whye Teh\\n\\n[2] Stochastic Gradient Hamiltonian Monte Carlo; Tianqi Chen, Emily B. 
Fox, Carlos Guestrin\\n\\n[3] What Are Bayesian Neural Network Posteriors Really Like?; Pavel Izmailov, Sharad Vikram, Matthew D Hoffman, Andrew Gordon Gordon Wilson\\n\\n[4] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles; Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell\\n\\n[5] A Simple Baseline for Bayesian Uncertainty in Deep Learning; Wesley J. Maddox, Pavel Izmailov, Timur Garipov, Dmitry P. Vetrov, Andrew Gordon Wilson\\n\\n[6] On Calibration of Modern Neural Networks; Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger\", \"additional_comments_on_reviewer_discussion\": \"The reviews for the paper were mixed: 6,6,3. The reviewers raised concerns with scalability of the proposed method, and the limited evaluation. The authors responded with new experiments on CIFAR-10. Notably, the reviewers were initially impressed with results in a data limited setting on MNIST. However, the authors removed those experiments completely from the paper during the rebuttal phase:\\n> While testing against stronger baseline methods than SGD we noticed that the superiority of our approach in data-constrained settings unfortunately vanishes. There we did not include data-constrained experiments on the CIFAR-10 dataset. This makes the MNIST experiments irrelevant, especially since it is a toy dataset anyways.\"}",
"{\"title\": \"Answers to Questions\", \"comment\": [\"Thank you for your review :) Your feedback has led to a major improvement of the paper in our view.\", \"The submitted revision contains answers to all four of your questions. In a nutshell:\", \"In the revision we gave results on training a convnet of 890k parameters on CIFAR-10.\", \"We compared the results against two of the strongest SOTA baselines available, namely AdamW and IVON, each with a cosine annealing learning rate schedule.\", \"In the conclusion section we are 100% transparent about major limitations, in particular the increased training time compared to AdamW. In a subsequent paragraph we address potential solutions to these issues.\", \"Also in the conclusion we outline ideas on how to extend our framework to ResNets and transformers.\", \"While testing against stronger baseline methods than SGD we noticed that the superiority of our approach in data-constrained settings unfortunately vanishes. There we did not include data-constrained experiments on the CIFAR-10 dataset. This makes the MNIST experiments irrelevant, especially since it is a toy dataset anyways. We also removed the computational performance section. Instead we state in the limitations part of the conclusion that the training time, while scaling linearly in the model and dataset size, is typically one to two orders of magnitude higher than with AdamW using the highly optimized PyTorch framework.\"]}",
"{\"summary\": \"They propose a scalable message-passing framework for Bayesian neural networks and derive message equations for various factors, which can benefit factor graph modeling across domains. The method is applied for both CNNs and FCNs on MNIST.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Good contribution for the message passing community, especially the experiments on CNNs.\"], \"weaknesses\": [\"The writeup overall is difficult to follow and the motivation of the work is not clear.\", \"Poor experimental evaluation, especially since the method is tested only against synthetic data and MNIST. It should be tested on bigger models than LeNet (which is quite outdated), and it should also be tested against CIFAR-10, which is a common benchmark on BNNs.\", \"All of the proofs are in the Appendix. I would suggest squeezing something into the main text since there is still space on page 10. Probably the main proof of the global minimization objective?\"], \"questions\": \"My main question is why you have not evaluated the proposed approach on CIFAR10 and evaluated only against MNIST? MNIST is considered a toy dataset, so we do not know if your method generalizes to bigger models even though you scaled the MLP model on MNIST. In its current state the paper is not strong either theoretically or experimentally, and therefore it needs more revisions.\", \"misc\": \"line 178 \\\"predictionsfor\\\" -> \\\"predictions for\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Answer to Question\", \"comment\": \"Thank you for your review! We appreciate that!\\n\\nWe hope your question is answered in our reply to reviewer yEQC.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}"
]
} |
EIXZXPz7jU | FMS PINN: Flow-matching sampling for efficient solution of partial differential equations with source singularities | [
"Yana Khassan Nibal",
"Jiexing Gao",
"Yimin Huang",
"Fedor Buzaev"
] | Singularities in the source functions of partial differential equations (PDEs) can pose significant challenges for physics-informed neural networks (PINNs), often leading to numerical instability and necessitating a large number of sampling points, thereby increasing the computational time. In this paper, we introduce a novel sampling point selection method to address these challenges. Our approach is based on diffusion models capable of generative sampling from the distribution of PDE residuals. Specifically, we apply the optimal transport coupling flow-matching technique to generate more sampling points in regions where the PDE residuals are higher, enhancing the accuracy and efficiency of the solution. In contrast to existing approaches in the literature, our method avoids explicit modeling of the probability density proportional to residuals, instead using the benefits of flow matching to generate novel and probable samples from more complex distributions, thereby enhancing PINN solutions for problems with singularities.
We demonstrate that this method, in certain scenarios, outperforms existing techniques such as normalizing flow-based sampling PINN. In particular, our approach is effective in improving the solution quality for the linear elasticity equation in the case of a material with a complex inclusion geometry. A detailed comparison of the flow matching sampling method with other approaches is also provided. | [
"physics informed neural networks",
"Adaptive sampling"
] | Reject | https://openreview.net/pdf?id=EIXZXPz7jU | https://openreview.net/forum?id=EIXZXPz7jU | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yVjnCFYDxC",
"vnhlXSPngx",
"F4R0lmNkdk",
"7uvZsvNwrD",
"2TFNZ6hnsk",
"1VOeHszAmU"
],
"note_type": [
"decision",
"official_review",
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1737524101526,
1729015374647,
1734682978034,
1730211975888,
1730833425868,
1730383265973
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11074/Reviewer_g3T5"
],
[
"ICLR.cc/2025/Conference/Submission11074/Area_Chair_dUjE"
],
[
"ICLR.cc/2025/Conference/Submission11074/Reviewer_W7jW"
],
[
"ICLR.cc/2025/Conference/Submission11074/Reviewer_8sXG"
],
[
"ICLR.cc/2025/Conference/Submission11074/Reviewer_SCKW"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper proposes a method using flow matching for adaptive sampling of residual points in physics-informed neural networks (PINNs), aimed at improving stability and accuracy, particularly for PDEs with singularities. The approach is compared against normalizing flow-based sampling, demonstrating advantages in certain scenarios.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper demonstrates improvements of the flow matching-based sampling method over the previous normalizing flow approach. The proposed methodology might provide insights for the community.\", \"weaknesses\": [\"The writing could be improved. Many citations should use `\\\\citep` instead of `\\\\citet`. Some symbols are not properly defined, e.g. $\\\\mathbb{S_k}$ appears in Equations (2) and (3), long before it is introduced in Algorithm 1. The definition of the optimization problem in Equation (7) is nonstandard. Additionally, some acronyms are not properly introduced, such as DAS and AAS.\", \"The structure of the paper is confusing. Section 3.4 appears to describe the author's original work but falls under \\\"3. Related Work.\\\" Furthermore, there is a \\\"Section 3.1\\\" that comes after Section 3.4, which disrupts the logical flow.\", \"Some figures have random gray lines surrounding them (e.g., Figures 2(b,c) and 4(b), among others).\", \"The novelty of the proposed method seems limited. The work replaces the GAN or normalizing flow in previous approaches with flow matching, which is known to have some advantages in certain scenarios. This incremental improvement may not be sufficient to justify the claims of significant advancement.\", \"While the work might be useful, its practical impact on solving PDEs appears limited. Practitioners typically care about whether they should use this method for solving their PDE problem, including aspects like accuracy and computational speed. 
The paper primarily focuses on showing that this ML method is more accurate than a previous ML method, without adequately addressing the broader context of whether PINNs are appropriate for the given problem compared to traditional numerical methods.\", \"The use of the term \\\"source singularities\\\" might be inappropriate. The source terms in the Poisson equations presented are sharp Gaussians at best, not truly singularities (delta functions). These singularities can lead to discontinuous gradients or solutions. See methods for elliptic interface problems. Even in PINNs, there are more exact methods to handle the singularities, e.g., Tseng, Y.-H., Lin, T.-S., Hu, W.-F., Lai, M.-C., 2023. A cusp-capturing PINN for elliptic interface problems. Journal of Computational Physics.\", \"The two inset figures in Figure 1 (\\\"train flow matching generative model\\\" and \\\"construct sample from flow matching generative model\\\") do not appear to be original work. Similar images can be found in Figures 7 and 21 on [this website](https://mlg.eng.cam.ac.uk/blog/2024/01/20/flow-matching.html).\", \"The authors should provide a discussion on computational time. For instance, in Section 4.1, 28,000 points are introduced at each resampling stage, leading to a substantial increase in point count during the whole training process. Since \\\"high computational costs\\\" was mentioned as a drawback of previous approaches, it is unclear whether the proposed method can alleviate this issue.\", \"The authors should report the mean and standard deviation of different trials, as sampling is required in this method. This would provide a better understanding of the variability and reliability of the results.\"], \"questions\": [\"In Algorithm 2, why is the ODE solved backward from $t = 1$ to $t = 0$? This seems to contradict Equation (10). 
Could the authors clarify this?\", \"Based on Figure 4(a), it is unclear why a similar distribution could not be achieved using simpler methods like RAR (or other methods that do not require a neural network) or the weighted bootstrap step in this work. What happens if we remove the flow matching step?\", \"The authors mention both AAS and DAS but only compare their method with DAS. Since AAS claims to improve upon DAS, how does the proposed method compare to AAS?\", \"In Table 1, the accuracy of DAS is reported to be on the order of 1, which seems lower than values cited in the literature (around 1e-2, see Table 1 in AAS paper and table 1 and 2 in DAS paper). Could the authors explain this discrepancy?\", \"In Section 4.2, it would be helpful if the authors showed the material interface and the final distribution of collocation points, allowing readers to better understand how the proposed method works.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"Reviewers agree to reject this paper in its current state due to lack of novelty, inconsistencies in the text, and missing clarifications.\\nAuthors did not respond to the reviewer comments. Therefore the paper is recommended for rejection.\", \"additional_comments_on_reviewer_discussion\": \"Authors did not rebut.\"}",
"{\"summary\": \"This paper proposes a resampling procedure for PINN training when the right-hand side is sharply peaked. The idea is to use flow matching to generate new samples in regions where the residual is high; in each iteration, more quadrature points are added to the PINN objective function to penalize points where the PDE is harder to solve. The method is tested on the Poisson equation and a simple version of elasticity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Resampling for PINNs makes sense and can help identify regions where the solver needs extra \\\"help\\\" to find the right solution.\", \"weaknesses\": \"Overall, the method seems to be a fairly heuristic extension of existing works. Although I see the authors are applying ideas from the flow matching work, they do not articulate important details of their algorithm, e.g. the distribution from which flow matching is attempting to draw samples. The tests here show preliminary evidence that the method has some value but are far from conclusive (just 3 PDE examples total as far as I can tell).\", \"questions\": \"I have also included comments in this section.\\n\\n***\\n\\nCareful with LaTeX formatting bugs --- e.g., in first paragraph of the introduction there is a stray period (line 38), backward quotation marks (line 42), and \\\\citet{} style citations that should be \\\\citep{}. There are also some mild grammar issues (e.g., missing \\u201cthe\\u201d before \\u201cPINN loss function\\u201d on line 43). These didn\\u2019t impede understanding, but I would suggest a thorough edit before the final version of this paper is published.\\n\\nIn the unnumbered equation above (2), should the norms be squared? And do you need a parameter trading off between the interior and boundary terms?\\n\\nShould the sums in (2)-(3) be integrals? 
Are the samples x_i re-drawn in every iteration or kept fixed during the training procedure?\\n\\nThe last three paragraphs of section 3.1 seem like they\\u2019re missing a connection to what came before them. In particular, these paragraphs describe generic methods for variational inference and sampling, but it\\u2019s not entirely clear how these methods are applied to PINNs specifically.\", \"line_166\": \"\\u201cpf\\u201d\\n\\nLine 171, \\u201cthe generative algorithm\\u201d --- one of many\\n\\nShould the loss in (6) be squared?\\n\\nIt seems a lot of section 3.1 is repeating what\\u2019s in the flow matching paper. Is all of this discussion needed to describe the proposed new algorithm in the paper?\\n\\nExposition-wise, I would suggest reallocating the \\u201creal estate\\u201d in the paper\\u2019s discussion substantially. In particular, the first 4 pages of the paper are background, then there is exactly 1 page describing the method, and then the remainder of the paper describes results. It seems valuable to extend the discussion of the proposed method, making sure the algorithm is described in full detail and that appropriate properties of the sampling method (e.g., making sure it recovers solutions to the PDE as the neural network gets more neurons) are stated carefully. The exposition of the algorithm in section 3.4.1 is quite terse --- for example, precisely what distribution is the flow matching method sampling from?\\n\\nEq (10) is deterministic other than the initial condition X_0 --- there is no Brownian motion term, for example. Why use stochastic PDE notation?\\n\\nAlgorithm 2 seems to be standard \\u201cforward Euler\\u201d solution of an ODE and can probably be omitted.\\n\\nAlgorithm 1 has many vague steps. For example, what algorithm is used for weighted bootstrap resampling? How is the vector field \\u201ctrained\\u201d and for how many steps? 
Do you prune the set of points S_k in each step or does it get larger and larger in each iteration?\\n\\nFigure 1 did not help me understand the algorithm and seems quite abstract.\\n\\nIs it possible to replace the peaks in example (13) with true delta functions, or does the right-hand side of the Poisson equation in the proposed method need to be differentiable?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a method for automatic and adaptive sampling of domain points during the training of a physics-informed neural network (PINN) model. The main idea is to use a flow-matching technique to learn the distribution of the ''correct'' density of the domain points from the current residuals. This is similar to the ''mesh refinement'' procedures common in PDE applications. At a typical ''refinement'' step in the current method, the residuals are used to train a flow-matching model to arrive at the unknown distribution of the domain points. Now, this model is used to generate point samples for the next loop of training. The authors present two examples with singular behaviour, and compare their results with one other work, which is called ''DAS-PINN.'' In both these cases, \"FMS-PINN\" is seen to be the superior method.\\n\\n`Overall impression:` The major idea, i.e., using a generative model to generate PINN points for training, is not new. The marginal contribution here is using the flow-matching technique to do so. Nevertheless, it makes for an interesting read, and the results, though neither comprehensive nor conclusive, are interesting as well. Decent paper, but not ready for publication.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is mostly well written, but some parts are confusing (see questions).\", \"Interesting application of flow-matching to generate sample points for PINN training.\"], \"weaknesses\": [\"The mathematical exposition (Sec 3.4) is not well written (see questions).\", \"The authors compare their results with one existing method (DAS-PINN, Tang et al., 2023). But only one of the examples from Tang et al. is recreated in this paper, and that too is relegated to the appendix. 
This is the only point of exact comparison, and it is inconclusive whether one method is better than the other.\", \"(minor) Line 325: (probably) incorrectly refers to Figure 12\"], \"questions\": [\"Please rewrite the first subsubsection under Sec 3.4 (titled ''3.1 Flow matching'' for some reason!). The probability densities $ p_0 $ and $ p_1 $ are used without any introduction. Line 175: ''As sampling is based on ...\\\" is unintelligible. Equation 6 is written without any preceding or succeeding text. Equation 7 can be written in much simpler notation (a standard minimization statement can be used). The whole subsection feels very disjointed. I think, rather than attempting a generic text on flow-matching, this subsection can be used for a better exposition of the particular application you are aiming for.\", \"Sec 3.4.1 presents the main contribution of this paper, but it feels hurried. Please explain the ''weighted bootstrap procedure'' in detail, or cite appropriate references.\", \"Sec 4.1: How do the results compare if you simply sample more points near the peak, since the peaks are known a priori?\", \"Sec 4.2: please provide a better description of the problem, e.g., with a schematic diagram, or cite a reference text.\", \"All the presented examples are ones where the ''singularity'' regions are known a priori. How would this method work when a PDE solution naturally develops regions that require refinement, e.g., flow problems that develop boundary layers? This can be shown by taking a simple advection-diffusion equation with Dirichlet boundary conditions.\", \"How was the DAS-PINN code configured? What were the hyper-parameters? As someone who has not run either of the codes, how can I convince myself that due diligence was done on the DAS-PINN hyperparameters?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors propose a new strategy for re-sampling new training data points for training PINN networks. At each stage of the PINN training algorithm, they train a new flow-based generative model for sampling points in regions where the PDE residuals have large values. At each epoch, the training points are the union of the previous points and the newly generated points.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Both the objective and the proposed algorithm are well explained.\", \"Most of the related works and useful concepts are explained.\", \"The contribution makes sense. Indeed, using normalizing flows for resampling points seems to be overkill, as it is not necessary to have explicit probability densities and invertibility.\", \"Different experimental conditions\"], \"weaknesses\": [\"You only compare with DAS PINN and no other methods. As you mention AAS-PINN and RAR in the related works, you should also compare with these methods. In particular, it would be interesting to have comparisons with methods that do not use generative models for re-sampling.\", \"Moreover, I question how fair your comparison with DAS PINN is. Indeed, you use the same number of training epochs / training points for both models, but the two do not share the same architecture. It seems in Figure 4 b) and 10 that DAS-PINN has not fully converged yet. Can you provide the same curves with a larger number of epochs for both methods?\", \"The \\\"weighted bootstrap resampling\\\" algorithm should be clearly explained and detailed. It is a major block of the method, and it is mentioned without further explanations.\", \"There are several typos, missing commas, and English mistakes that need to be corrected.\", \"Color inconsistency in the plots of Figure 16.\", \"Sections 3.2 and 3.3 seem to repeat / should be merged.\", \"$p_1(x)$ is used without being defined. 
Is it a pushforward measure?\", \"The explanation of the AAS method is not clear enough for me.\"], \"questions\": [\"Is $f_\\theta$ re-trained from scratch at each iteration, or fine-tuned?\", \"It seems that the number of training points grows with the number of iterations. Is it really the case, or do you discard training points at each iteration?\", \"How does the method scale to higher dimensions? It seems that the advantage of using flow-based generation over normalizing flows should be stronger in high dimension. Unfortunately, the authors experiment only up to dimension 5.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}"
]
} |
EHmjRIA4l2 | Compositional World Models with Interpretable Abstractions | [
"Vishwas Sathish",
"Rajesh P. N. Rao"
] | We present a modular and compositional approach to learning human-aligned world models via state-action hierarchies. Our approach is inspired by sensory-motor hierarchies in the mammalian brain. We model complex state transition dynamics as a sequence of simpler dynamics, which in turn can be modeled using even simpler dynamics, and so on, endowing the approach with rich compositionality. We introduce Composer, a practical method for learning complex world models that leverages hypernetworks and abstract states for generating lower-level transition functions on-the-fly. We first show that state abstractions in Composer emerge naturally in simple environments as a consequence of training. Incorporating a variant of contrastive learning allows Composer to scale to more complex environments while ensuring that the learned abstractions are human aligned. Additionally, learning a higher-level transition function between learned abstract states leads to a hierarchy of transition functions for modeling complex dynamics. We apply Composer to compositional navigation problems and show its capability for rapid planning and transfer to novel scenarios. In both traditional grid-world navigation problems as well as in the more complex Habitat vision-based navigation domain, a Composer-based agent learns to model the state-action dynamics within and between different rooms using a hierarchy of transition functions and leverage this hierarchy for efficient downstream planning. Our results suggest that Composer offers a promising framework for learning the complex dynamics of real-world environments using a compositional and interpretable approach. | [
"State-Action Abstractions",
"Predictive Coding",
"Hierarchical Planning",
"Compositional World Models",
"Contrastive Learning",
"Hypernetworks",
"Hierarchical Reinforcement Learning"
] | https://openreview.net/pdf?id=EHmjRIA4l2 | https://openreview.net/forum?id=EHmjRIA4l2 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"p3pAMUpL6j",
"cV3Roi82AA",
"WWwBxrPVA4",
"UcRxLnEBmx",
"SGKz326xuu",
"RrDqW8XRrB",
"6wZE4zNVai"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment"
],
"note_created": [
1729687851266,
1733221779221,
1733221736989,
1733221835447,
1730651731927,
1730633978463,
1733222252544
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13341/Reviewer_JL4b"
],
[
"ICLR.cc/2025/Conference/Submission13341/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13341/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13341/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13341/Reviewer_o9fL"
],
[
"ICLR.cc/2025/Conference/Submission13341/Reviewer_doSy"
],
[
"ICLR.cc/2025/Conference/Submission13341/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"The authors present Composer, a method for learning a hierarchical transition function where the low-level prediction of the next state depends on a high-level latent variable. The model uses hypernetworks, some self-supervised learning and some supervised learning from labels and is applied to a toy gridworld and to some Habitat 2.0 scenes.\\n\\nMy decision is to reject the paper as it is far from ready for publication.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"the general question addressed by the authors of learning useful state abstractions to get a hierarchical transition function is very relevant\"], \"weaknesses\": [\"the paper is unclear about its objectives: some studies performed in Sections 4.4 and 4.5 do not correspond to the objectives mentioned in the introduction, and the necessary elements to perform these studies are not described in the methods. The figure corresponding to what should be interpreted as the main contribution is relegated to an appendix. I suggest moving Fig. S10 back to the main paper and reconsidering the introduction so that it incorporates the objectives of Sections 4.4 and 4.5 (or refocus the paper and remove these sections).\", \"it is not clear whether the paper has a body of results that could be the contribution of a good paper, given that most of the experiments and conclusion build more on ongoing work and things to come and most of the results appear mainly preliminary. The authors should focus more on strengthening and expanding the completed results, providing more in-depth analysis and discussion of their significance.\", \"the methods are insufficiently described and not clearly formalized. See below for details and advice.\", \"the paper does not have a related work section. 
The authors should include a comprehensive related work section that covers key areas such as hierarchical reinforcement learning, world models, and neuroscience-inspired AI approaches. This will help contextualize their contribution within the field.\", \"the Composer system is not compared to any baseline. The authors should compare to relevant work in the experimental section.\", \"there is an ablation of removing the supervised contrastive loss, but this is the only ablation and it is not identified as such.\"], \"questions\": [\"I\\u2019m using this section more to criticize the current form and to suggest improvements to the authors, as I think the paper is too far from being ready for publication.\", \"the authors should ask themselves who did things similar to what they are trying to do, then write a related work section and compare their approach to baselines. It is only through such comparisons that we can determine whether their work is a useful piece of research or not. The answer \\u201cwe are the only ones to tackle this question\\u201d is always wrong. For instance, the authors should have a look at this paper:\", \"Gumbsch, C., Butz, M. V., & Martius, G. (2021). Sparsely changing latent states for prediction and planning in partially observable domains. Advances in Neural Information Processing Systems, 34, 17518-17531.\", \"and other papers from the same authors. I\\u2019m quite sure that they will find many works they should compare themselves to.\", \"discovering the high-level latent state is obviously the difficult question in the author\\u2019s setup. One expects some clever new idea to do this when reading the abstract and the introduction of the paper. But it is only on page 5 that the authors mention for the first time that they will use supervised contrastive learning. This appears as a late and unsatisfactory addition to their model after (probably) failing to use anything requiring less human-engineered data. 
The authors should definitely be honest about their method from the beginning, as they generate expectations that are not fulfilled. The argument \\u201cbut humans also learn from labels\\u201d to counteract the negative impression it generates is also very weak and makes the situation even worse.\", \"the formalization is far from satisfactory.\", \"footnote 1 p. 4 specifies that there is a high-level time period T which is not introduced and never described. How does the high-level time change? Is it defined manually? The authors should start section 2 with a problem statement where they describe the setup and all their assumptions\", \"the equations should come inside a sentence explaining what they are about\", \"In (1) H is the hypernetwork, right? This should be mentioned line 148. In (4) it is noted H_\\\\theta\\u2026\", \"Eq. (4) describes e, but e is not used anywhere anymore\", \"We have to guess that the authors will use Eqs (4) to (6) rather than (1) to (3), this is not clearly stated\", \"Line 182, the authors mention using a recurrent network that has never been described (nor any hyper-parameter of the method, by the way). This is where we guess that they use (4) to (6)\", \"In (8), we do not know what s_T is, footnote 1 does not help much. Is the lambda term a regularizer? This is not explicit at all.\", \"Make a sentence to describe what (9) and (10) are about.\", \"lines 186 sq. : \\u201cSince our hierarchical transition models are task-independent, the rewards obtained in any particular task do not directly affect the transition models. We intend to explore incorporating reward prediction (in addition to state prediction) at the lower level in future work (Hafner et al. (2020)).\\u201d \\u2192 this should move to a future work section (as many other statements about ongoing or future work\\u2026)\", \"Figure 4, the caption should conclude about what we should see from the right part. 
Actually, I would put the version with the contrastive loss first, and the ablation later in the paper.\", \"In Figs 5 and 6, does the x,y position of image patches mean something, or is it only their relative distance that matters? Would we get the same organization if we had many more patches, as should be the case for Habitat 2.0?\", \"line 376: \\u201creplacing autoencoders \\u2026 with ViT \\u2026 is straightforward\\u201d: so why didn\\u2019t the authors do it?\", \"In Section 4.4, the authors introduce abstract actions, subgoals, higher-level transition dynamics, but the description is rather incomplete. Shouldn\\u2019t these elements be presented in the methods? Or is it just a side result? If it is a side result, shouldn\\u2019t it be published in a side paper?\", \"In Section 4.4 the authors state that subgoal learning is left for future work, but in Section 4.5 there are 8 possible subgoals, we do not know where they come from. Again, the authors should have a clear problem statement in the beginning of Section 2 to delineate the problem they want to address and their assumptions, and then stick to the problem they have described.\", \"line 484 \\u201cWork on learning useful skills without hand-designed abstract actions (Eysenbach et al. (2018)) is ongoing.\\u201d Such a sentence should not appear in a results section. Maybe in future work, but the best is to get the corresponding results, then publish them.\", \"Figure 9(a), why are the episodic rewards decreasing BEFORE the goal changed? This needs to be commented upon.\", \"line 506: \\u201cThe method is inspired by the theory of the mammalian cortex\\u201d. If there was such a unique theory, I would be glad to know it. The authors probably mean \\u201cthe predictive processing theory...\\u201d, but they have to be aware that this is not the only theory. 
Furthermore, in the introduction where the authors shortly describe some elements of this \\u201ctheory\\u201d, they call upon various elements corresponding to various perspectives; I\\u2019m not sure we can consider the corresponding set of elements to constitute a theory. And again, the authors should compare themselves against other models that are inspired by these various elements of this \\u201ctheory\\u201d.\", \"Figure S10 should be moved into the main paper, if the main results are about Habitat 2.0\", \"# Typos, minor errors:\", \"the authors mention many times that their method uses unsupervised learning, but it seems more \\u201cself-supervised\\u201d, as it self-corrects its prediction based on the posterior evidence.\", \"why not call your latent variables \\u201cz\\u201d, as many authors do?\", \"line 96: code snippets are promised in the Appendix, but I could not find them\", \"line 142: \\u201cMention gridworld being top down POMDP. FIX\\u201d A good sign that the paper is not ready for publication\\u2026\", \"refer to equations using \\\"eqref{}\\\" rather than \\\"ref{}\\\"\", \"equations finishing a sentence should finish with a dot.\", \"line 182: a formulation in line with\\u2026 : what do the authors mean: that it vaguely resembles...? the authors have to be more accurate.\", \"kalman \\u2192 Kalman\", \"line 255 two \\u2026 step(s)\", \"line 298 (amortized inference): make a sentence. What do you want to say?\", \"line 367 These new rooms \\u2026 demonstrates. Apart from the grammar issue, a new room does not demonstrate anything, the authors have to rephrase to make their point clear.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"nothing specific\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": [\"We sincerely thank the reviewer for their feedback on our paper. We were unable to perform the necessary comparisons and baselines, and have decided to withdraw the paper from consideration. Our intention is to resubmit an improved version of the paper that reflects the expected rigor and quality. We address the questions raised in the review:\", \"Timescales: We will make this aspect more clear since it is a cause for confusion. We have two variables **time steps**($\\\\tau$) and **timestamp**($t, T$). T is used as the higher level **timestamp**: $s_{t, T}$ represents the state at $t^{th}$ lower level timestamp and $T^{th}$ higher level timestamp. Figure 2 helps make this clear. Suppose we have an agent taking an action in the environment every step, we denote current \\u201clower-level\\u201d **timestamp** with $t$. However, the agent also maintains a \\u201chigher-level state\\u201d variable that updates every $\\\\tau$ steps instead of updating every step (similar to options in RL). In our experiments (both gridworld and Habitat), we show results with $\\\\tau =$ $10$ and $15$ (Figure 2, 5). This means that the higher-level variable (we use $s^{(2)}$ in the paper) updates every $10$ steps. This update is posed as an inference problem in the paper and hence, we say that the inference process runs for $\\\\tau$ steps. We will make this more clear in the next version of the paper.\", \"Planning Algorithm: The results are shown for the gridworld environment. For the RL baseline and the lower level Composer agent, we use off-the-shelf policy gradients with baselines, advantage estimates and gradient clipping. This approach is similar to PPO and allows for faster convergence. Planning is done via Model Predictive Control (Random Shooting). The lower level agent is given an oracle transition model and reward function which correctly predicts next states. The planning horizon for the lower level agent is 10. 
Composer uses the learnt higher level state model to plan, with a horizon of 3. We use $\\\\tau = 10$ here, but even if $\\\\tau=5$ was used, a horizon of 3 at the higher level would imply a lookahead of $5 \\\\times 3 = 15$ steps. This is the advantage of using hierarchical planning.\"]}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely thank the reviewer for their feedback on our paper. We were unable to perform the necessary comparisons and baselines with DD-PPO, DreamerV3 and TD-MPC2, and have decided to withdraw the paper from consideration. Our intention is to resubmit an improved version of the paper that reflects the expected rigor and quality. We address the questions raised in the review:\\n\\n1. Is the contrastive loss necessary for learning diverse representations? No, it is not necessary as shown in Figure 5. However, it depends on what kind of diversity is expected. Our original hierarchical model is meant to detect diverse dynamics without any supervision and contrastive loss. But in practical applications like navigation in a home environment (Habitat 2.0), the different dynamics might not be useful. For example, in Figure 4C, it is possible to color the plot in a way that makes the points separable in 2D. But these clusters might not make any sense in the environment considered. What is useful is a cluster that we can make sense of: different rooms. It is not guaranteed that the rooms have different vision-based transition functions or similar functions $P(s_{t+1}|s_t, a_t)$ within the room. In these cases (e.g., home robots), we can use contrastive loss to learn diverse representations.\\n2. Would labeling with a VLM, as suggested in the paper, solve the navigation tasks similarly to your abstracted method? Labeling with a VLM is complementary to our method. We can use the labels generated by a VLM to learn diverse transferable dynamics.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely thank the reviewer for their time and effort in providing thoughtful and constructive feedback on our paper. We found the suggestions and comments to be helpful in reiterating our work. However, we were unable to fulfill all the requests effectively on time and therefore have decided to withdraw the paper. We feel that addressing all the points raised (including problem formulation, comparison and benchmarking against GateL0RD, THICK WMs) requires making significant changes to the paper that might be incompatible with the current version.\\n\\nWe are committed to thoroughly addressing your feedback and improving our presentation. Our intention is to resubmit the paper once it is more mature and would reflect the level of quality and rigor expected. Below, we have provided responses to some of the concerns raised in the review. While this may not address all the points, we hope it demonstrates our commitment to improving the manuscript: \\n\\n1. GateL0RD [1] and THICK World Models [2]: We thank the reviewer for bringing this line of work to our attention. We will use them as our baselines.\\n2. Supervised contrastive learning is not the crux of our method. It was an addition that served to show that our model can learn human-aligned abstractions, if required (Figures 5 and 7), similar to how human brains adapt from their surroundings by categorizing observations. The highlight of our paper is Figure 3, representing the top-down modulation that allows abstractions of transition functions. We will make this clearer in our revision.\\n3. In Figures 5 and 6, the (x, y) positions do not mean anything significant and we have found them to converge to different (x, y) locations during repeated runs. However, the relative distance remains. 
The Habitat examples in Figures 4 and 7 show similar behavior, except that in Figure 4, without the use of Supervised Contrastive Learning, the abstractions are not tightly packed in 2D space. This could be attributed to the noise in the dynamics arising from high-dimensional RGBD images.\\n4. \\u201creplacing autoencoders \\u2026 with ViT \\u2026 is straightforward\\u201d: so why didn\\u2019t the authors do it? : Pretrained encoders on Habitat were available as open-source model weights from [3]. We have mentioned this in Section 4.3. Since we are academic researchers with access to very few GPUs, we prioritized pretrained visual encoders that would otherwise require millions of frames and significant compute resources [3].\\n5. Why are the episodic rewards decreasing BEFORE the goal changed? Thank you for bringing this to our attention, we will correct the figure. We plot a running average of the rewards with a window of 150 steps, for clarity. The raw rewards are extremely noisy, as is the case for RL algorithms. The plots are left-shifted by 150 for this reason. We have confirmed that the episodic rewards decrease only after changing the goal, which is the expected behavior for an on-policy RL algorithm conditioned on a single goal. In fact, the 2nd goal change at 2000 episodes is closer to the initial goal, which gives a small boost for the RL Agent.\\n\\n[1] Gumbsch Christian, et al. (2021)\\n\\n[2] Gumbsch Christian, et al. (2021)\\n\\n[3] Wijmans Erik, et al. (2020)\"}",
"{\"summary\": \"## Compositional World Models with Interpretable Abstractions\\nThis paper introduces an instantiation of a world model that distinguishes between high-level and low-level dynamics. The introduced method Composer turns long-horizon modeling tasks into a series of smaller tasks, guided by a high-level abstraction. They generate hypernetworks, conditioned on a high-level abstraction, that define low-level dynamics. Composer is tested on a variety of navigation tasks in toy domains like Gridworld and realistic domains like Habitat.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper is written clearly and motivates the need for abstracted world models. The usage of hypernetworks for this task is novel and an interesting approach to solving the hierarchical problem. The authors also provide interesting ablative experiments of their method in gridworld.\", \"weaknesses\": \"There are several key issues with this paper that prevent it from achieving a higher score.\\n1. **Contrastive loss as an aside**: Section 2 describes the pipeline for learning abstract representations of state for the purpose of learning low-level dynamics. However, the losses in (7) and (9) alone might lead to a local optimum where the abstract representation contains no information. Thus, the method relies on the contrastive loss in (11) to prevent collapse of the abstract representation. However, the paper is structured such that this nuance is lost and that the contrastive loss is used solely for scaling. \\n2. **Experiments do not indicate that abstraction is necessary**: The main task that is used in the experiments is navigation. However, the underlying dynamics in experimental domains do not change from room to room. To my knowledge, the Habitat navigation tasks used in this paper do not significantly benefit from the abstraction described here. 
Mainly, language-navigation tasks (such as \\\"Go pick up the toothbrush\\\") do require abstraction because room abstractions help condition the nav policy. It is imperative that the authors clarify why their chosen navigation tasks do in fact require a significant amount of abstraction.\", \"3. **Usage of neuroscience**: The ideas in the paper can stand alone without the motivation of mammalian neuroscience. I would suggest removing the neuroscience-oriented text because it does not add to the content of this paper. \", \"4. **Experiments do not include significant baselines**: The purpose of learning compositional world models is to learn policies that are more generalizable and efficient than other model-free or model-based methods. However, from the experiments, this is not clear. I can only garner qualitative attributes of the method from the experiments. It is important to compare Composer with other world model and model-free policy learning methods. As such, I suggest at least comparing against DD-PPO (a baseline already in Habitat-lab), DreamerV3 and TD-MPC2, or equivalent baselines if these are unsuited to your tasks.\", \"minor\": \"Line 142 seems to have text not meant for the submitted manuscript.\", \"questions\": \"In addition to addressing my comments in the Weakness section, I would like the following questions answered.\\n\\n1. Is the contrastive loss necessary for learning diverse representations?\\n2. Would labeling with a VLM, as suggested in the paper, solve the navigation tasks similarly to your abstracted method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a world-model architecture that uses hypermodels and runtime inference for learning abstractions. For realistic environments, additional supervision in terms of sparsely labeled location specifications was needed to obtain a reasonable abstract clustering. The system is tested in a toy gridworld and in Habitat 2.0. The model can be used for high-level planning and reduces the planning overhead drastically.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"interesting and reasonable hierarchical model architecture\", \"analysis of the method on Habitat, so a realistic environment\", \"good visualizations\", \"visualizations and insights into latent representations are given\"], \"weaknesses\": [\"the paper is not very well written (partially unfinished)\", \"no fair baselines, the method is not compared to Dreamer or TDMPC or THICK world-models ( https://openreview.net/forum?id=TjCDNssXKU )\", \"no action incorporation into the higher level\", \"the name of the method is misleading: I see only one small piece of evidence of compositionality with the two trivial gridworlds, but other than that the architecture has no particular bias towards creating compositional structures and I would expect much stronger empirical evidence if you want to claim that compositionality emerges.\", \"It is unfortunate the paper was not carefully edited before submission. There are some unfinished sentences and missing glue in the paper.\", \"I like the overall approach, but from what is presented here, it does not seem ready yet. Fair comparisons and ablation studies are missing.\", \"many small details: letters are reused or mixed up: for example, $\\\\tau$ is used for time scale, temperature and also in one case for the inner inference iterations, but called $K$ in the caption of Fig 3.\"], \"questions\": [\"I did not understand exactly how the timescales interact. 
You use T for the high-level timescale, but it is not clear to me when this is updated. Do you make the inference only every $\\\\tau$ steps or every step?\", \"which exact algorithms are used for Fig 9? What is the planning horizon for the planners?\", \"Fig 8: I would expect a comparison between the case with the high-level model and without.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely thank all the reviewers for their time and effort in reading our paper and giving critical feedback. Overall, we feel that addressing all the issues raised (formulation, comparison and benchmarking against Dreamerv3, TDMPC2, THICK WMs, DD-PPO) requires making significant changes to the structure of paper that might be incompatible with the current version. Hence, we wish to withdraw the paper: Our intention is to resubmit the paper once it is satisfactory and would reflect the level of quality and rigor expected by the ICLR community.\"}"
]
} |
|
EHhLLmDvtE | Subtle Errors Matter: Preference Learning via Error-injected Self-editing | [
"Kaishuai Xu",
"Tiezheng YU",
"Wenjun Hou",
"Yi Cheng",
"Chak Tou Leong",
"Liangyou Li",
"Xin Jiang",
"Lifeng Shang",
"Qun Liu",
"Wenjie Li"
] | Large Language Models (LLMs) have exhibited strong mathematical reasoning and computational prowess, tackling tasks ranging from basic arithmetic to advanced competition-level problems. However, frequently occurring subtle errors, such as miscalculations or incorrect substitutions, limit the models’ full mathematical potential. Existing studies to improve mathematical ability typically involve distilling reasoning skills from stronger LLMs or applying preference learning to step-wise response pairs. Although these methods leverage samples of varying granularity to mitigate reasoning errors, they overlook the frequently occurring subtle errors. A major reason is that sampled preference pairs involve differences unrelated to the errors, which may distract the model from focusing on subtle errors. In this work, we propose a novel preference learning framework called eRror-Injected Self-Editing (RISE), which injects predefined subtle errors into partial tokens of correct solutions to construct hard pairs for error mitigation. In detail, RISE uses the model itself to edit a small number of tokens in the solution, injecting designed subtle errors. Then, pairs composed of self-edited solutions and their corresponding correct ones, along with pairs of correct and incorrect solutions obtained through sampling, are used together for subtle error-aware DPO training. Compared with other preference learning methods, RISE further refines the training objective to focus on predefined errors and their tokens, without requiring fine-grained sampling or preference annotation. Extensive experiments validate the effectiveness of RISE, with preference learning on Qwen2-7B-Instruct yielding notable improvements of 3.0% on GSM8K and 7.9% on MATH. | [
"Mathematical Reasoning",
"Preference Learning"
] | Reject | https://openreview.net/pdf?id=EHhLLmDvtE | https://openreview.net/forum?id=EHhLLmDvtE | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"z9LyYqr9s8",
"wIKSJcE1Pu",
"vMGkEwqwX6",
"tjzH1R7Qhp",
"tIdQyfk5Qo",
"s65qkrStiL",
"rJpFeaHH0i",
"qyZVS62Tmn",
"ofhS7tKP39",
"oGcCsKnF1t",
"o9AfebfkAL",
"k7FH8TxzqN",
"jlwaxWu5h0",
"hMryszJDYH",
"g58sN6xqMY",
"eVtVRBhLOh",
"eBnjZQCImX",
"dToRg5C3kM",
"cTKqLQ4cft",
"b9jUxcdtRS",
"YR2KPt2BZS",
"YJFiZP2x7K",
"XQkgqmggea",
"W20gP17cMJ",
"V1leD0kmtl",
"N5X2wW0Lpe",
"L0oyWdz0eW",
"Hxyv6IR5Y8",
"Hf1o6pWGGt",
"GwDQNeKweH",
"G1OvwcyK2U",
"EhCGk8unIB",
"EHoU2rJClh",
"BFi3z8YU9q",
"5aVrnvVEDs",
"5QJF4lW6Jx",
"2bu6tIekjv",
"29v7TsmcGw",
"1Wify0eBxY",
"0cmOiLXSa8"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1732658862352,
1733121582387,
1732209373483,
1732204993979,
1732205517635,
1732469644610,
1732721037316,
1733190743114,
1732658829207,
1732203599906,
1730066253905,
1733121497839,
1735009411990,
1732547487470,
1732204463979,
1732468640048,
1732259274799,
1732547104758,
1732244120872,
1732263351556,
1732468458859,
1732658799184,
1732202718392,
1730601699574,
1732547330780,
1730590601847,
1732468520040,
1732208787971,
1732261289408,
1733190985937,
1730696503264,
1733280975804,
1732507359589,
1732637398587,
1732609479062,
1732262589913,
1733190857759,
1732547213335,
1737524094285,
1733149952564
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10957/Area_Chair_t9mg"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Area_Chair_t9mg"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Reviewer_r3F2"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Area_Chair_t9mg"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Area_Chair_t9mg"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Reviewer_kiSm"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Reviewer_nbYg"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Reviewer_CRH2"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Reviewer_nbYg"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10957/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Dear reviewer r3F2,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer r3F2,\\n\\\\\\n\\\\\\nI hope this message finds you well. As the rebuttal period is approaching its conclusion, we wanted to kindly follow up regarding your feedback on our submission. Your insights are incredibly important to us, and we would greatly appreciate it if you could share your response at your earliest convenience.\\n\\nWe have carefully addressed the points raised in the initial reviews and conducted additional experiments to strengthen our submission. We believe these improvements significantly enhance the quality of the work, and we hope you might take them into consideration when reassessing the paper and its score.\\n\\nThank you very much for your time, effort, and thoughtful evaluation. Please let us know if there are any further concerns or questions we can address to assist you.\\n\\\\\\n\\\\\\nAuthors of Paper #10957\"}",
"{\"title\": \"Response to Official Review by Reviewer r3F2 (2/3)\", \"comment\": \"**2. Capturing the Full Spectrum of Potential Errors**\\n\\nWe agree that pre-defined templates may have limitations in capturing the full spectrum of potential errors. To further illustrate that our approach has the potential to be generalized to more diverse errors, we implement another experiment with a more universal prompt template. The prompt template is as follows:\\n\\n\\\"*Edit the current step to introduce an error. Do not state that errors have been made.*\\\"\\n\\nThis prompt doesn\\u2019t indicate any error types and leverages the LLM itself to randomly introduce an error, which can capture a broader spectrum of error types. More importantly, this prompt can introduce arbitrary errors and even unexposed errors. Preliminary results on Qwen2-7B-Instruct with these self-edited samples are shown as follows:\\n\\n| Method | GSM8K | MATH |\\n| :-----| :----: | :----: |\\n| RISE-prompt-pre-defined-error | 88.4 | 59.9 |\\n| RISE-prompt-arbitrary-error | 88.3 | 59.7 |\\n\\nThe results show a similarly significant improvement compared with the results of our pre-defined prompt templates. \\n\\nAdditionally, we are considering creating prompt templates with more error types via comprehensive GPT-4o-based error analysis. We can use the self-instruct method with the analyzed errors to create prompt templates automatically. However, our experiments have already demonstrated the feasibility of this framework, as the error types used in our experiments are systematically identified and summarized through GPT-4o-based error analysis.\"}",
"{\"title\": \"Response to Official Review by Reviewer kiSm (3/4)\", \"comment\": \"**3. Experiments on More Open-Source Models**\\n\\nWe appreciate this suggestion and agree that evaluating RISE on a broader range of open-source models could further validate its effectiveness. We implement additional experiments on Ministral-8B-Instruct and Qwen2.5-7B-Instruct, as these models are the most recent and well-regarded for their performance in various reasoning tasks. Preliminary results on these models show the effectiveness of our framework as follows:\\n\\n| Method | GSM8K | MATH |\\n| :-----| :----: | :----: |\\n| Ministral-8B-Instruct-2410 | 86.35 | 53.62 |\\n| Ministral-8B-DPO | 86.95 | 54.18 |\\n| Ministral-8B-RISE | **88.62** | **54.86** |\\n\\n| Method | GSM8K | MATH |\\n| :-----| :----: | :----: |\\n|Qwen2.5-7b-Instruct| 91.81 | 74.36 |\\n|Qwen2.5-7b-DPO| 92.49 | 75.00 |\\n|Qwen2.5-7b-RISE| **92.95** | **75.06** |\\n\\nWe can observe that RISE significantly improves the mathematical performance of both Ministral-8B-Instruct-2410 and Qwen2.5-7b-instruct. Especially for the Ministral model, the accuracy on GSM8K increases a lot. Both models have stable improvements on GSM8K and MATH. We will include the full results in the revised paper.\"}",
"{\"title\": \"Response to Official Review by Reviewer kiSm (4/4)\", \"comment\": \"**4. Number of Self-Edited Pairs**\\n\\nWe appreciate this important observation and agree that further analysis is needed to explain the performance decline with more self-edited pairs. Our initial hypothesis is that introducing too many self-edited pairs may overwhelm the model with too many similar pairs, potentially leading to overfitting on certain patterns, since self-edited pairs from one problem share a large portion of context. \\n\\nTo some extent, increasing self-editing pairs has a similar effect to increasing the number of training epochs under one self-editing pair, and more pairs will lead to overfitting. Thus, we may pre-set the number of self-edited pairs depending on the selection of training epochs and add more pairs if the model requires more training epochs under general DPO training.\"}",
"{\"title\": \"Influence of Hyperparameter $\\\\alpha$\", \"comment\": \"We compare different values \\u200b\\u200bof the hyperparameter $\\\\alpha$. The results are shown as follows:\\n\\n| $\\\\alpha$ | 0.01 | 0.05 | 0.1 | 0.2 |\\n| :-----| :----: | :----: | :----: | :----: |\\n| GSM8K | 88.5 | 88.4 | 87.9 | 87.7 |\\n| MATH | 59.3 | 59.9 | 59.6 | 59.3 |\\n\\nAn excessively large $\\\\alpha$ may reduce the model's generalization ability, which in turn results in lower accuracy on GSM8K and MATH.\"}",
"{\"title\": \"Response to Reviewer nbYg\", \"comment\": \"Thank you very much for your kind reply! We are currently conducting experiments on Llama-3.1-70B-Instruct and Qwen2-72B-Instruct again. In comparison to the 7B-level experiments (k=5), we are significantly increasing the number of sampling attempts (k=50) to generate pairs for a larger number of problems. We hope these experiments will address your concerns about the performance limitations of the larger model.\\n\\nDue to equipment limitations, these experiments may require some additional time, but we will share the results as soon as they are available.\\n\\\\\\n\\\\\\nSincerely,\\n\\nAuthors of Paper #10957\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer CRH2,\\n\\\\\\n\\\\\\nThe rebuttal period ends in just a few hours, and we kindly ask for your feedback. We have carefully addressed your concerns and conducted additional experiments to strengthen our submission. We hope you will consider these improvements when reassessing the paper and its score.\\n\\nThank you very much for your time and consideration!\\n\\\\\\n\\\\\\nAuthors of Paper #10957\"}",
"{\"comment\": \"Dear reviewer kiSm,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"title\": \"Response to Official Review by Reviewer kiSm (1/4)\", \"comment\": \"Thank you for your detailed review and thoughtful feedback. We appreciate your recognition of our work and would like to address the concerns and questions you raised:\\n\\n**1. Generalizability and Domain-Specificity**\\n\\nWe agree that expanding the evaluation to additional domains would provide a more comprehensive understanding of RISE's generalizability. We apply our RISE to other domains such as code generation, where subtle mistake detection is essential. The editing prompt is as above:\\n\\n\\\"*Edit the current step to introduce an error. Do not state that errors have been made.*\\\"\\n\\nThis prompt can introduce arbitrary errors and can be adapted to other domains easily. I am still processing the code dataset, and the detailed results will be released soon. Other relevant experiments that use such a universal prompt on our mathematical dataset have been conducted, and the results demonstrate the effectiveness of our RISE (refer to the (2/4) response).\"}",
"{\"summary\": \"This paper proposes a novel approach to enhancing the performance of language models in mathematical problem-solving by introducing noise into correct solutions to create subtly incorrect answers. These correct/incorrect answer pairs are then utilized to fine-tune a DPO (Direct Preference Optimization) model. Empirical evidence demonstrates the effectiveness of this approach in improving the model\\u2019s accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper effectively addresses the challenge of subtle error detection, a common issue faced by large language models (LLMs). The proposed method successfully enhances models, transforming relatively weaker ones into stronger versions.\\n1. The technique leverages the simplicity of generating incorrect solutions rather than correct ones, making the training process more efficient. By instructing models to produce errors, it harnesses the LLM\\u2019s capability to identify likely mistakes based on its own tendencies, leading to a self-improvement mechanism.\", \"weaknesses\": \"1. The effectiveness of this method heavily relies on prompt engineering. The quality and specificity of the prompts used to generate incorrect answers significantly influence the quality and subtlety of the generated mistakes.\\n1. As the approach is based on pre-defined templates, it may not capture the full spectrum of potential errors a language model might make, leaving certain blind spots unaddressed.\\n1. The scalability of the proposed method remains uncertain, as it primarily focuses on generating diverse incorrect answers rather than ensuring diversity in correct solutions. This might limit its applicability in more complex or varied scenarios.\", \"questions\": \"1. Do you think this method can be extended to other domains where subtle mistake detection is crucial, such as logical reasoning and programming? 
If so, what adaptations would be necessary?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer CRH2,\\n\\\\\\n\\\\\\nI hope this message finds you well. As the rebuttal period is nearing its conclusion, we wanted to kindly follow up regarding your feedback on our submission. We highly value your insights and would greatly appreciate it if you could provide your response at your earliest convenience.\\n\\nWe have worked diligently to address the concerns raised in the reviews and believe the updates and additional experiments we've conducted strengthen our submission significantly. We hope you might consider these improvements when evaluating the paper and its potential for a higher score.\\n\\nThank you very much for your time and consideration. Please let us know if you have any further questions or concerns that we can address.\\n\\\\\\n\\\\\\nAuthors of Paper #10957\"}",
"{\"metareview\": \"The paper presented an interesting method of generating hard negative preference pair construction through error-injected self-editing. The proposed method enhances the mathematical reasoning capability of LLMs by subtle error-aware DPO training.\", \"strength\": \"1. A useful method to generating hard negative examples to improve the reasoning capabilities of LLMs.\\n2. The method appeared to be novel, though the novelty might not be significant (An interesting negative sampling approach).\", \"weakness\": \"1. The improvement over existing methods is not very significant.\\n2. Method is heavily depending on prompt engineering and templates, which could limit its general applicability.\\n\\nThe paper is borderline. My major concern (similar to reviewer nbYg) is that the improvement on larger models is relatively minor (in the noisy region), signaling the method might not scale well. This could be seen in other tasks other than Math as questioned by reviewer CRH2.\", \"additional_comments_on_reviewer_discussion\": \"The original support of paper is pretty lukewarm. Most of reviewers did not participate the discussion, unfortunately. In making the decision, AC considered the authors' rebuttal fully and found the overall improvement has not met the publication bar for ICLR.\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer nbYg,\\n\\\\\\n\\\\\\nWe hope this message finds you well. Thank you again for your thoughtful feedback on our submission.\\n\\nAs the rebuttal period is coming to a close, we wanted to kindly follow up to request your feedback on our responses. We have made every effort to address your concerns through detailed clarifications, and we would greatly appreciate hearing whether our rebuttal has resolved your questions.\\n\\nIf you have any remaining concerns or require further clarification, we would be happy to address them before the rebuttal deadline. Thank you again for your time and effort in reviewing our work.\\n\\\\\\n\\\\\\nSincerely,\\n\\nAuthors of Paper #10957\"}",
"{\"title\": \"Response to Official Review by Reviewer kiSm (2/4)\", \"comment\": \"**2. Diversity of Error Types & Interaction of Different Error Types**\\n\\nTo illustrate that our approach has the potential to be generalized to more diverse errors, we implement another experiment with a more universal prompt template. The prompt template is as follows:\\n\\n\\\"*Edit the current step to introduce an error. Do not state that errors have been made.*\\\"\\n\\nThis prompt doesn\\u2019t indicate any error types and leverages the LLM itself to randomly introduce an error, which can capture broader spectrum error types. More importantly, this prompt can introduce arbitrary errors. Preliminary results on Qwen2-7B-Instruct with these self-edited samples are shown as follows:\\n\\n| Method | GSM8K | MATH |\\n| :-----| :----: | :----: |\\n| RISE-prompt-pre-defined-error | 88.4 | 59.9 |\\n| RISE-prompt-arbitrary-error | 88.3 | 59.7 |\\n\\nThe results show a similar significant improvement compared with the results on our pre-defined prompt templates. However, as our injected errors are based on comprehensive error analysis and are better aligned with the practical situation, the performance with pre-defined error prompts is slightly better than that with arbitrary error.\\n\\nAs for the interaction of different error types, we believe that the interaction of different error types could indeed improve performance by making the training data more diverse and challenging, potentially helping models learn to generalize better. We create a prompt with the interaction of different errors and conduct another experiment. 
Preliminary results on Qwen2-7B-Instruct with these self-edited samples are shown as follows:\\n\\n| Method | GSM8K | MATH |\\n| :-----| :----: | :----: |\\n| RISE-prompt-single-error | 88.4 | 59.9 |\\n| RISE-prompt-combination-error | 88.7 | 60.0 |\\n\\nWe can see that these compound error-injected samples help the model further improve the mathematical performance.\"}",
"{\"title\": \"Response to Official Review by Reviewer CRH2 (3/3)\", \"comment\": \"**3. Adaptation to code generation**\\n\\nWe apply our RISE method to code generation, where avoiding subtle errors is critical. Following [1], we adopt the LeetCode dataset [2] to conduct training. The dataset includes around 2K leetcode tasks in the medium and hard levels. For the Qwen2-7B-Instruct model, we sample 50 times and finally obtain 1473 pairs of chosen and rejected samples for training. The preliminary results are shown as follows:\\n\\n| Method | MBPP | Humaneval |\\n| :-----| :----: | :----: |\\n| Qwen2-7B-Instruct | 42.2 | 43.9 |\\n| Qwen2-7B-DPO | 43.4 | 46.3 |\\n| Qwen2-7B-RISE | **44.2** | **47.6** |\\n\\nWe can observe that our RISE performs better than the general DPO method, achieving a 0.8% improvement on the MBPP test set and a 1.3% improvement on the Humaneval test set. Considering that the current solution editing strategy has not yet been adjusted to account for the characteristics of code generation, there is still certain room for improvement in the results.\\n\\n[1] Xu, Bin, Yiguan Lin, and Yinghao Li. \\\"SRA-MCTS: Self-driven Reasoning Aurmentation with Monte Carlo Tree Search for Enhanced Code Generation.\\\" arXiv preprint arXiv:2411.11053 (2024).\\n\\n[2] https://huggingface.co/datasets/greengerong/leetcode\"}",
"{\"title\": \"Response to Official Review by Reviewer nbYg (1/3)\", \"comment\": \"Thank you for your detailed review and constructive feedback. We appreciate your valuable comments and would like to address the concerns you raised:\\n\\n**1. Motivation and Impact of Random Rejected Samples**\\n\\nWe would like to clarify that we do not claim that using randomly sampled negative solutions (rejected samples) harms the model's mathematical reasoning ability. On the contrary, our ablation experiments (- w/o self-edited pairs: GSM8K: 88.3 (+2.9), MATH: 58.2 (+6.0)) demonstrate that even using only randomly sampled negative solutions can help the model achieve correct solutions more consistently. Randomly sampled rejected samples are effective for preference learning since they do involve wrong answers. Furthermore, our method constructs self-edited pairs and supplements them into the existing randomly sampled rejected samples. These pairs, as finer-grained preference pairs, further enhance the model's ability to reduce small errors (RISE-QWEN2-7B: GSM8K: 88.4 (+3.0), MATH: 59.9 (+7.7)).\\n\\nOur motivation is that randomly sampled rejected samples make it difficult for preference learning to focus on subtle errors, as the differences between chosen and rejected samples often include content unrelated to errors. Our self-edited pairs can make up for this deficiency. A randomly sampled pair is shown below:\\n\\n**Chosen:**\\n\\n\\\"Let's think step by step.\", \"step_1\": \"Determine the total hours one pencil can last.\\nSince Jenine can sharpen a pencil 5 times before it becomes unusable and she needs to sharpen a pencil every 1.5 hours of use, one pencil can last for \\\\(5 - 1.5 = 3.5\\\\) hours.\\\"\\n\\nFrom the above two samples, we can observe that it is easier for the model to focus on the subtle but key errors in the generated samples, as there is only a symbol difference (\\\"\\\\times\\\" <-> \\\"-\\\") between them. 
Thus, we believe that our motivation is reasonable, as demonstrated by our comprehensive experiments.\", \"step_2\": \"We can multiply the first two factors to get \\\\[(x^2 + 2x + 17)(x - r) = 0.\\\\] Expanding this product, we get \\\\[x^3 + (2 - r)x^2 + (17 - 2r)x - 17r = 0.\\\\] Comparing this with the given polynomial, we see that \\\\[a = 2 - r, \\\\quad b = 17 - 2r, \\\\quad \\\\text{and} \\\\quad -17r = 31.\\\\] From the last equation, we get $r = -\\\\frac{31}{17} = -\\\\frac{31}{17}$.\", \"step_3\": \"Substituting $r = -\\\\frac{31}{17}$ into the equation $a = 2 - r$, we get \\\\[a = 2 - \\\\left(-\\\\frac{31}{17}\\\\right) = 2 + \\\\frac{31}{17} = \\\\frac{34}{17} + \\\\frac{31}{17} = \\\\frac{65}{17}.\\\\] Therefore, the value of $a$ is $\\\\boxed{\\\\frac{65}{17}}$. The answer is: $\\\\boxed{\\\\frac{65}{17}}$\\\"\\n\\nFrom the above two samples, it is hard to directly locate errors in the rejected by comparing the chosen and rejected samples token by token, as there is a significant difference in the expression of these two samples. However, our self-edited pairs can help the model focus on subtle errors more conveniently. A self-edited pair is shown below:\\n\\n**Chosen:**\\n\\n\\\"Let's think step by step.\", \"step_4\": \"Calculate the product of the first two roots:\\n\\\\[r_1 \\\\cdot r_2 = (-1 - 4\\\\sqrt{2})(-1 + 4\\\\sqrt{2}) = 1 - (4\\\\sqrt{2})^2 = 1 - 32 = -31\\\\]\", \"step_5\": \"Knowing that the product of the roots is $-31$, and the product of the first two roots is $-31$, we can deduce that $r_3 = 1$.\", \"step_6\": \"Now, calculate the sum of the roots:\\n\\\\[r_1 + r_2 + r_3 = (-1 - 4\\\\sqrt{2}) + (-1 + 4\\\\sqrt{2}) + 1 = -1 - 1 + 1 = -1\\\\]\", \"step_7\": \"According to Vieta's formulas, the sum of the roots is $-a$. Therefore:\\n\\\\[-a = -1 \\\\Rightarrow a = 1\\\\]\\n\\nSo, the value of $a$ is $\\\\boxed{1}$.\\\"\\n\\n**Rejected:**\\n\\n\\\"Let's think step by step.\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer CRH2,\\n\\\\\\n\\\\\\nWe hope this email finds you well. Thank you again for your thoughtful review and valuable feedback on our submission.\\n\\nAs the rebuttal period is coming to an end, we wanted to kindly follow up to ask if you could provide any feedback on our rebuttal. We have worked hard to address your concerns and questions through detailed responses and additional experiments, and we would greatly appreciate your thoughts on whether our clarifications have sufficiently addressed your concerns.\\n\\nIf there are any remaining issues needing further clarification, we would be more than happy to engage before the rebuttal period concludes. Thank you again for your time and consideration, and we truly appreciate your efforts in reviewing our work.\\n\\\\\\n\\\\\\nSincerely,\\n\\nAuthors of Paper #10957\"}",
"{\"title\": \"Response to Official Review by Reviewer r3F2 (3/3)\", \"comment\": \"**3. Scalability of the Method & Applicability to Other Domains**\\n\\nWe appreciate this observation and agree that scalability is an important consideration. A key algorithmic feature of our method, as well as other DPO-like methods, is the in-distribution sampling of chosen or rejected solutions from the policy model's learned distribution, followed by targeted in-distribution optimization to better align the model's responses with human preferences. Consequently, these methods may not have originally been designed to improve response diversity. Our method, in particular, primarily enhances the model's ability to minimize subtle errors, enabling it to consistently arrive at correct solutions with greater reliability. \\n\\nWe believe that our method can be applied in more complex or varied scenarios and be extended to other domains, such as logical reasoning and programming, where subtle mistake detection is essential. Given the results of self-editing with an extremely simple prompt (\\u201cintroduce an error\\u201d), we seamlessly implement our RISE on the code generation task. The prompt is the same as above:\\n\\n\\\"*Edit the current step to introduce an error. Do not state that errors have been made.*\\\"\\n\\nThis prompt can introduce arbitrary errors and can be adapted to other domains such as code generation easily. I am still processing the code dataset, and the detailed results will be released soon.\"}",
"{\"title\": \"Response to Official Review by Reviewer CRH2 (2/3)\", \"comment\": \"**2. Evaluation of other tasks**\\n\\nWe appreciate the suggestion to include more diverse tasks in our evaluation. While the primary focus of our work was on math-related tasks, we agree that testing on truly out-of-distribution datasets would further strengthen the generalizability claims of our method.\\nAs suggested, we are expanding our evaluation to ZebraLogic (Puzzle Acc and Cell Acc), MBPP, and Humaneval to assess the model's performance on logic-based tasks and code generation. The results are shown as follows:\\n\\n|Method|Puzzle Acc|Cell Acc|MBPP|Humaneval|\\n| :-----| :----: | :----: |:----: |:----: |\\n|Qwen2-7B-Instruct| 8.11|21.49|42.2|43.9|\\n|Qwen2-7B-DPO| 8.10|20.82|42.0|45.1|\\n|Qwen2-7B-RISE| **8.40**|**23.24**|**42.4**|**47.5**|\\n\\nWe can observe that even without training on the above two tasks, our RISE outperforms both the original instruct model and DPO-tuned model. These results demonstrate the strong generalization of our RISE to out-of-domain tasks. The evaluation algorithm is based on two public evaluation repos [1] and [2].\\n\\n[1] https://github.com/bigcode-project/bigcode-evaluation-harness\\n\\n[2] https://github.com/WildEval/ZeroEval\"}",
"{\"title\": \"1. Generalizability and Domain-Specificity\", \"comment\": \"We apply our RISE method to code generation, where avoiding subtle errors is critical. Following [1], we adopt the LeetCode dataset [2] to conduct training. The dataset includes around 2K leetcode tasks in the medium and hard levels. For the Qwen2-7B-Instruct model, we sample 50 times and finally obtain 1473 pairs of chosen and rejected samples for training. The preliminary results are shown as follows:\\n\\n| Method | MBPP | Humaneval |\\n| :-----| :----: | :----: |\\n| Qwen2-7B-Instruct | 42.2 | 43.9 |\\n| Qwen2-7B-DPO | 43.4 | 46.3 |\\n| Qwen2-7B-RISE | **44.2** | **47.6** |\\n\\nWe can observe that our RISE performs better than the general DPO method, achieving a 0.8% improvement on the MBPP test set and a 1.3% improvement on the Humaneval test set. Considering that the current solution editing strategy has not yet been adjusted to account for the characteristics of code generation, there is still certain room for improvement in the results.\\n\\n[1] Xu, Bin, Yiguan Lin, and Yinghao Li. \\\"SRA-MCTS: Self-driven Reasoning Aurmentation with Monte Carlo Tree Search for Enhanced Code Generation.\\\" arXiv preprint arXiv:2411.11053 (2024).\\n\\n[2] https://huggingface.co/datasets/greengerong/leetcode\"}",
"{\"comment\": \"Dear reviewer CRH2,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"title\": \"Response to Official Review by Reviewer CRH2 (1/3)\", \"comment\": \"Thanks a lot for your detailed review and constructive feedback. We appreciate your valuable comments and would like to address the concerns you raised:\\n\\n**1. Scope of Performance Evaluation and Dataset Selection**\\n\\nWe understand the concern regarding the potential limitations of using a single dataset for training. Our choice of the Lai et al. (2024) dataset was motivated by two factors: (1) the good results achieved in that paper using this dataset, and (2) the convenience it provides in allowing direct comparison with methods in Lai et al. (2024).\\n\\nNonetheless, we agree that evaluating on a broader set of datasets could provide more insight into the generalizability of our approach. To address this, we have implemented additional experiments using other mathematical datasets, including problems from the original training sets of the GSM8K [1] and MATH [2] datasets. We collect 15K problems like DART-math [3] to conduct RISE training. Preliminary results on Qwen2-7B-Instruct indicate that our RISE framework achieves better performance than the general DPO method:\\n\\n| Method | GSM8K | MATH |\\n| :----- | :----: | :----: |\\n| Qwen2-7B-Instruct | 85.4 | 52.2 |\\n| Qwen2-7B-DPO | 87.7 | 57.5 |\\n| Qwen2-7B-RISE | **88.6** | **58.5** |\\n\\nWe will include more detailed results on other math evaluation datasets later.\\n\\n[1] Cobbe, Karl, et al. \\\"Training verifiers to solve math word problems.\\\" arXiv preprint arXiv:2110.14168 (2021).\\n\\n[2] Hendrycks, Dan, et al. \\\"Measuring mathematical problem solving with the math dataset.\\\" arXiv preprint arXiv:2103.03874 (2021).\\n\\n[3] Tong, Yuxuan, et al. \\\"Dart-math: Difficulty-aware rejection tuning for mathematical problem-solving.\\\" arXiv preprint arXiv:2407.13690 (2024).\"}",
"{\"summary\": \"The paper presents a novel preference learning framework known as eRror-Injected Self-Editing (RISE), which is designed to enhance the mathematical reasoning capabilities of Large Language Models (LLMs). The core contribution of RISE is its innovative approach to error mitigation by injecting subtle, predefined errors into correct solutions, creating hard pairs for training that help the model focus on common mistake patterns.\\nThe framework operates by using the LLM to generate correct multi-step solutions and then intentionally introducing errors into these solutions to form self-edited pairs. These pairs, along with correctly and incorrectly solved samples, are used for subtle error-aware Direct Preference Optimization (DPO) training. The paper reports improvements on mathematical reasoning tasks when RISE is applied to LLMs like Qwen2-7B-Instruct, and shows improvement on the GSM8K and MATH dataset. These results demonstrate the effectiveness of RISE in refining the training objective to target subtle error tokens and in improving the model's ability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Originality: The paper introduces RISE, a novel preference learning framework that addresses the subtle error problem in LLMs, which is not commonly seen in other works.\\n2. Quality: The paper compares RISE with other preference learning methods and shows that it outperforms them, which shows the quality of the approach.\\n3. Significance: The paper makes a significant contribution to the field of LLMs by providing a method to improve their mathematical reasoning capabilities, which is a critical area for LLM development.\", \"weaknesses\": \"1. Generalizability and Domain-Specificity: The paper focuses exclusively on mathematical reasoning tasks. It would be beneficial to see how RISE performs in other domains, such as reasoning and coding, where subtle errors also play a significant role.\\n2. 
Diversity of Error Types: While the paper addresses common error types in mathematical reasoning, it may not cover all possible error categories. For instance, it might be worth exploring errors related to the interaction of different error types.\\n3. Experiments: The authors should evaluate the performance of RISE on more open-source models (such as Mistral-7B-Instruct-v0.3 and the Qwen2.5 series). And the influence of hyperparameter $\\\\alpha$ should also be explored.\", \"questions\": \"1. The interaction of different error types: Will the interaction of different error types improve the performance?\\n2. Number of self-edited pairs: Why does the performance on GSM8K and MATH both decrease with more self-edited pairs? Please further analyse this phenomenon to pre-define the number of self-edited pairs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer r3F2,\\n\\\\\\n\\\\\\nWe hope this email finds you well. Thank you again for your thoughtful review and constructive feedback on our submission.\\n\\nAs the rebuttal period is approaching its end, we wanted to kindly follow up to request your feedback on our responses. We have worked hard to address your concerns through detailed explanations and additional experiments, and we would greatly appreciate hearing your thoughts on whether our rebuttal has adequately addressed your questions.\\n\\nIf you have any remaining concerns or need further clarification, we would be happy to provide additional details before the rebuttal deadline. Thank you again for your time and invaluable contributions to improving our work.\\n\\\\\\n\\\\\\nSincerely,\\n\\nAuthors of Paper #10957\"}",
"{\"summary\": \"This paper presents a model fine-tuning method that aims to solve the math problem of LLMs. Specifically, the paper reuses the existing training paradigm (RLFH or DPO concretely) but pays attention to the training pair generation. The negative sample is generated by prompting LLM with intentional instruction on producing wrong answers (with particular error types described in Section 2.1). The main motivation comes from the hypothesis that the existing fine-tuning solution does not provide targetted training pairs, causing the fine-tuned model to capture subtle errors that are not intended.\\n\\nThe training objectives are borrowed from previous works, including DPO, step-wise DPO, and negative log-likelihood loss (to stabilize the training). The contribution of this paper is more about how to get better negative samples. Experiments show the proposed approach provides reasonable improvement on small models (7B) but with limited improvement on large models (70B). Compared to the DPO solution previously used, the improvement is also not very significant.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and easy to understand. The majority of the descriptions are with clear equations or demonstrative figure supports. While some contents are not included in the paper, they are well-known in the literature.\\n\\nActively looking for hard negative examples is always a research topic. The paper focuses on generating regularized negatives (from the list of error types) such as enforcing the fine-tuned model to avoid making similar mistakes. The strategy is letting LLM provide the wrong examples as part of forward passing over steps. \\n\\nExperiments included multiple LLMs in comparison. 
While most of them are not intended to solve math problems, it is good to have more things to compare.\", \"weaknesses\": \"The most obvious weakness of this paper is the lack of solid evidence of motivation. While the paper stated that negative samples randomly generated may be unrelated to error, there is no solid evidence on how it impacts the model's math ability. The research gap has yet to be explicitly visible so far. The authors should provide sufficient insight into how those randomly sampled negatives hurt the performance with solid evidence. I am not very convinced this issue is significant. Considering the limited performance improvement on large models (70B), I am concerned if this is indeed a problem of existing works.\\n\\nMost of the approaches used in this paper are existing training methods. The novelty of this paper is weak, given it is simply proposing a negative sample generation approach. The majority of the training descriptions are optional, given they are well-known. E.g. DPO, step-wise DPO, or NLL. The authors should give good justification for why using prompts to generate the wrong sample is an impactful contribution.\\n\\nThe last point of concern is the performance improvement of large models. The proposed method does not seem to improve the performance of the large models much. Why is that? It does not seem to be a problem of limited improvement space, since the large LLMs still suffer from math problems with many errors. However, even after this fine-tuning, the proposed solution fails to take care of them. Any insight? Is it indicating the proposed solution is going in the wrong direction for solving this problem?\", \"questions\": \"1. Why does this solution fail to improve the math of large models compared to the models fine-tuned on random negative samples? 
If someone uses a very simple negative sampling approach by generating random numerical values, is there any performance difference from the proposed solution?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"3. Scalability of the Method & Applicability to Other Domains\", \"comment\": \"We apply our RISE method to code generation, where avoiding subtle errors is critical. Following [1], we adopt the LeetCode dataset [2] to conduct training. The dataset includes around 2K LeetCode tasks at the medium and hard levels. For the Qwen2-7B-Instruct model, we sample 50 times and finally obtain 1473 pairs of chosen and rejected samples for training. The preliminary results are shown as follows:\\n\\n| Method | MBPP | Humaneval |\\n| :-----| :----: | :----: |\\n| Qwen2-7B-Instruct | 42.2 | 43.9 |\\n| Qwen2-7B-DPO | 43.4 | 46.3 |\\n| Qwen2-7B-RISE | **44.2** | **47.6** |\\n\\nWe can observe that our RISE performs better than the general DPO method, achieving a 0.8% improvement on the MBPP test set and a 1.3% improvement on the Humaneval test set. Considering that the current solution editing strategy has not yet been adjusted to account for the characteristics of code generation, there is still room for improvement in the results.\\n\\n[1] Xu, Bin, Yiguan Lin, and Yinghao Li. \\\"SRA-MCTS: Self-driven Reasoning Augmentation with Monte Carlo Tree Search for Enhanced Code Generation.\\\" arXiv preprint arXiv:2411.11053 (2024).\\n\\n[2] https://huggingface.co/datasets/greengerong/leetcode\"}",
"{\"title\": \"Response to Official Review by Reviewer r3F2 (1/3)\", \"comment\": \"Thanks a lot for your thoughtful and positive review. We appreciate your suggestions and would like to address the concerns you raised:\\n\\n**1. Reliance on Prompt Engineering**\\n\\nWe acknowledge that prompt engineering helps guide the generation of specific types of incorrect answers. However, it is important to note that the design of prompts can be flexible and adaptable. To reduce reliance on manual prompt engineering and demonstrate the flexibility of prompts used in RISE, we use the self-instruct method to generate a variety of prompt templates (10 templates for each type of error) and conduct self-editing with a random choice of the generated prompts. Some examples of prompt templates are shown as follows:\\n\\n**REPLACE a numerical value:**\\n\\n(1) Change a number in this step so that the calculation becomes incorrect, without indicating that a mistake has been introduced.\\n\\n(2) Alter the numerical value in this stage to produce an incorrect result, but avoid mentioning the error.\\n\\n(3) Modify a number in the current calculation to lead to a wrong outcome, without revealing the inaccuracy.\\n\\n(4) Adjust one of the values in this step to ensure the calculation is wrong, without pointing out the error.\\n\\n(5) Replace a number in the calculation with an incorrect one, but do not mention that anything is wrong.\\n\\n(6) Change a figure at this point to cause an erroneous result, without disclosing that you've made a mistake.\\n\\n(7) Introduce a wrong number in this calculation step, but refrain from stating that an error has occurred.\\n\\n(8) Modify a numerical value here so that the result is incorrect, without drawing attention to the mistake.\\n\\n(9) Adjust the number in this step to generate an inaccurate result, without acknowledging the error.\\n\\n(10) Introduce an incorrect value in this calculation, but avoid mentioning that the outcome is 
wrong.\\n\\n**SWAP two calculation terms:**\\n\\n(1) Switch the positions of two terms in the current calculation step to lead to an incorrect result, without explicitly acknowledging the mistake.\\n\\n(2) Rearrange two terms in the present step in a way that causes an error, but avoid mentioning that a mistake has occurred.\\n\\n(3) Alter the order of two terms in the current calculation to produce an incorrect outcome, without pointing out the error.\\n\\n(4) Exchange the positions of two terms in this step to intentionally create a miscalculation, and don't indicate that anything is wrong.\\n\\n(5) Adjust the placement of two terms in the ongoing calculation to introduce an error, without drawing attention to the fact.\\n\\n(6) Swap the order of two terms in the current process to result in a wrong answer, but refrain from noting the mistake.\\n\\n(7) Change the arrangement of two terms in the current step in a way that leads to an incorrect result, without signaling any error.\\n\\n(8) Interchange two terms in the current calculation step to produce a mistake, while keeping the error implicit.\\n\\n(9) Shift the positions of two terms in the calculation to create a wrong result, without stating that something is incorrect.\\n\\n(10) Modify the sequence of two terms in this step, causing an incorrect calculation, but don't mention the flaw.\", \"preliminary_results_on_qwen2_7b_instruct_with_the_above_self_edited_samples_are_shown_as_follows\": \"| Method | GSM8K | MATH |\\n| :-----| :----: | :----: |\\n| RISE-prompt-manual-error | 88.40 | 59.90 |\\n| RISE-prompt-self-instruct-error | 88.55 | 59.32 |\\n\\nWith a random selection of prompt templates, our RISE can still help improve mathematical reasoning capability and outperform the general DPO method. Compared with the results of the manual prompts used in our paper, the results of self-instruct prompts show a better accuracy on GSM8K but a slightly worse accuracy on MATH.\"}",
"{\"title\": \"Response to Official Review by Reviewer nbYg (2/3)\", \"comment\": \"**2. Novelty of the Approach**\\n\\nWe understand the concern regarding the novelty of the approach. We design a simple and effective framework to improve the LLM's ability to avoid subtle errors. The novelty of the approach lies more in the use of the **LLM itself** to generate **subtle, domain-specific incorrect solutions** that are crucial and effective for improving model robustness. Instead of sampling preference pairs, we utilize editing approaches to generate rejected samples, which is a more efficient way for models to learn to avoid specific subtle errors. Moreover, subtle errors are an important but often overlooked issue.\\n\\nIn addition, constructing more effective preference data is becoming increasingly important for preference learning, especially as human preferences may fail in more complex scenarios. Our method uses the model itself to generate fine-grained preference pairs and focuses on subtle but essential errors. What sets this work apart is the focus on **leveraging the model\\u2019s own understanding** to make realistic errors, which helps it learn to avoid similar mistakes. Thus, our method has the potential to enable models to **self-improve**.\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer nbYg,\\n\\\\\\n\\\\\\nWith only a few hours remaining in the rebuttal period, we kindly remind you to provide your feedback at your earliest convenience. We have carefully addressed your concerns and added new experiments to strengthen the submission. We sincerely hope you will consider these improvements when revisiting the paper and its score.\\n\\nThank you so much for your time and consideration!\\n\\\\\\n\\\\\\nAuthors of Paper #10957\"}",
"{\"summary\": \"This paper proposes a preference learning framework called RISE for DPO training. The key idea is to inject carefully created noise into the correct-incorrect answer pairs to guide the model on which mistakes to avoid.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper studies how to optimize preference learning, a timely and important topic. The proposed method RISE is easy to understand, and showcases empirical performance gains on two commonly used open-source models, Qwen and Llama.\", \"weaknesses\": \"My major concern is the scope of the performance evaluation. Across all experiments in this paper, the models are trained on a single dataset extracted from Lai et al. 2024. It is not clear whether the results are cherry-picked. For example, how about other DPO training datasets for math, and other datasets? In addition, it is not clear how RISE affects the models' quality on other tasks. The authors classify the 6 evaluation datasets used in this paper as either \\\"in-distribution\\\" or \\\"out-of-distribution\\\", but they are all indeed math-related questions. It would be more desirable to see evaluation on truly out-of-distribution datasets, such as physics or logic-heavy datasets (e.g., ZebraLogic or the ARC corpus).\", \"questions\": \"See my weakness comment.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Concerns Addressed During Rebuttal\", \"comment\": \"In response to the reviewers' suggestions and concerns regarding our work, we have explained in detail and updated the manuscript in the following aspects:\\n1. **Scope of the performance evaluation** (``Reviewer CRH2``). We have conducted two additional experiments, (1) Training on another mathematical dataset (Appendix D) and (2) Out-of-domain evaluations without further training (Section 3.5). These experiments demonstrate that RISE **performs robustly** with different training datasets, and even achieves **strong transferability of reasoning ability across diverse domains**.\\n2. **Adaptation to other out-of-domain tasks** (``Reviewer kiSm``, ``Reviewer r3F2``). We have conducted experiments on code generation (Appendix G). The results indicate an efficient and effective adaptation to other complex reasoning tasks.\\n3. **Error Types** (``Reviewer kiSm``). We have conducted experiments with arbitrary error injection and different error combinations (Appendix F). The results illustrate the **scalability** of our framework.\\n4. **Validation on more models** (``Reviewer kiSm``). We have conducted experiments on Qwen2.5-7B-Instruct and Ministral-8B-Instruct (Appendix C). The results further demonstrate the **broader applicability** of our framework.\\n5. **Number of self-edited pairs** (``Reviewer kiSm``). Introducing too many self-edited pairs may lead to overfitting on certain patterns since self-edited pairs from one problem share a large portion of context.\\n6. **Hyperparameter $\\\\alpha$** (``Reviewer kiSm``). We have explored the impact of the hyperparameter $\\\\alpha$ in Appendix E.\\n7. **Performance on larger models** (``Reviewer nbYg``). We have conducted comprehensive experiments on Llama-3.1-70B-Instruct. The results suggest RISE\\u2019s robustness and effectiveness for improving reasoning in larger language models. Moreover, it performs better than the original DPO method.\\n8. 
**Prompt design** (``Reviewer r3F2``). We have conducted experiments with multiple prompt designs (Appendix F), (1) prompt template for introducing arbitrary errors and (2) self-instruct prompt template. The results indicate that our framework **performs robustly** and is not affected by variations in prompt design.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewers,\\n\\\\\\n\\\\\\nWe sincerely thank you for thoroughly assessing our work and providing us with valuable and constructive feedback.\\n\\nOver the past few days, we have worked diligently to address your feedback and questions via additional experiments and detailed explanations. If you have any further queries or require clarifications, we would be happy to continue the discussion. We would also greatly appreciate it if you could reconsider the rating of our work based on the responses and updates we have provided during the rebuttal period.\\n\\nOnce again, we greatly appreciate your valuable contributions to improving our work.\\n\\\\\\n\\\\\\nSincerely,\\n\\nAuthors of Paper #10957\"}",
"{\"comment\": \"Thank you for your efforts in preparing the rebuttal. I have reviewed your response and decided to maintain my current score for two main reasons: (1) concerns about the novelty of the work and (2) the performance limitations with the larger model, which reduce the practical utility of the study. While the authors have provided a thorough and well-prepared rebuttal, the limitations of the work remain (nothing about the clarity of presentation). That said, I would not oppose acceptance if other reviewers advocate for it.\"}",
"{\"title\": \"General Response\", \"comment\": \"We sincerely thank all the reviewers for their insightful and constructive feedback on our manuscript. We have carefully addressed each of the individual comments in the reviews and believe we have successfully responded to most of the concerns raised. Additionally, we have incorporated the suggested experiments, along with their discussions and results, in the revised manuscript.\\n\\nBelow, we provide a brief summary of the updates made in the revision, including: **(1) Core Contributions**, **(2) Strengths**, **(3) Updates During Rebuttal**\\n\\n---\\n\\n### **Core Contributions**\\n\\n1. **Novel Framework**. We propose a novel preference learning framework that leverages the LLM itself to **inject errors** into correct solutions, constructing fine-grained hard pairs designed to **mitigate subtle errors**.\\n2. **Empirical Analysis**. Our study **identifies common error types** in mathematical reasoning and reveals the potential to improve the stability of the reasoning process by mitigating subtle errors.\\n3. **Experimental Results**. Through extensive experiments across various models (**Qwen2**, **Qwen2.5**, **Llama-3.1**, and **Ministral**) and tasks (**mathematical reasoning**, **logical reasoning**, and **code generation**), we demonstrate that RISE improves the reasoning capabilities of LLMs, helping LLMs mitigate subtle errors and consistently generate correct solutions.\\n4. **Transferability of Reasoning Ability across Diverse Domains.** Additional evaluation experiments on logical reasoning (ZebraLogic) and code generation (MBPP and Humaneval) show that our method can effectively generalize reasoning preferences learned from mathematical tasks to other complex reasoning domains even **without further in-domain training**.\\n5. **Flexible Adaptation to New Reasoning Tasks**. 
Our RISE framework has demonstrated its effectiveness in optimizing code generation, with **minimal modifications to editing prompts**. It shows that RISE can be flexibly and conveniently adapted to new tasks.\\n\\n### **Strengths**\\n\\n1. **Novelty of Method**. ``Reviewer kiSm`` and ``Reviewer r3F2`` agreed that our framework RISE is a novel and effective approach for preference learning.\\n2. **Significance**. ``Reviewer kiSm`` and ``Reviewer r3F2`` affirm that our study makes a contribution to the field of LLMs by focusing on subtle errors and improving mathematical reasoning ability.\\n3. **Writing and Presentation**. ``Reviewer CRH2`` and ``Reviewer nbYg`` praised the clarity and readability of our writing and presentation.\\n4. **Simplicity and Effectiveness**. ``Reviewer r3F2`` recognizes that our framework RISE is simple and effective.\\n5. **Self-improvement**. ``Reviewer r3F2`` approves that letting LLMs identify likely mistakes and optimizing based on error-injected self-edited pairs can lead to a self-improvement mechanism.\\n\\n### **Updates During Rebuttal**\\n\\n1. **Section 3.5**: Add evaluation results and analysis on out-of-domain reasoning tasks (logical reasoning and code generation).\\n2. **Appendix C**: Add validation experiments on more open-source models (Ministral-8B-Instruct and Qwen2.5-7B-Instruct).\\n3. **Appendix D**: Add validation experiments on another mathematical training dataset, including 15K problems from the original GSM8K and MATH training datasets.\\n4. **Appendix E**: Add exploration experiments of the hyperparameter $\\\\alpha$.\\n5. **Appendix F**: Add exploration experiments of prompt designs, including self-instruct prompts and prompts that introduce arbitrary errors without specifying a particular mistake.\\n6. 
**Appendix G**: Add validation experiments on other reasoning tasks, such as code generation.\\n\\nWe believe these additions and clarifications comprehensively address the reviewers' concerns and enhance the overall quality of our manuscript.\"}",
"{\"title\": \"Response to Official Review by Reviewer nbYg (3/3)\", \"comment\": \"**3. Performance Improvement on Large Models**\\n\\nWe appreciate the reviewer\\u2019s observation. The reduced performance gains on large models are indeed an interesting result. Our hypothesis is that larger models, while still prone to errors, may have already learned more sophisticated representations of mathematical reasoning during pre-training. As a result, the room for improvement through preference learning is smaller compared to smaller models. A similar phenomenon can be found in DART-math [1], where fine-tuning the Llama3-70B model achieves only a small accuracy gain on most in-domain and out-of-domain mathematical evaluation datasets. Some metrics even decrease after further fine-tuning. \\n\\nIn our experiments, we adopt mathematical problems from commonly recognized and used datasets such as MetaMATH and AQuA, and finally collect 4K pairs for preference learning of large models, corresponding to only 2K problems. Even if the number of sampling attempts is increased, the number of effective pairs that can be collected does not increase significantly. This number of problems may be inadequate to help large models improve much further.\\n\\n[1] Tong, Yuxuan, et al. \\\"Dart-math: Difficulty-aware rejection tuning for mathematical problem-solving.\\\" arXiv preprint arXiv:2407.13690 (2024).\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer r3F2,\\n\\\\\\n\\\\\\nThe rebuttal period is ending in just a few hours, and we kindly remind you to provide your feedback. We have thoroughly addressed your concerns and conducted additional experiments to strengthen our submission. We sincerely hope you will consider these improvements when reassessing the paper and its score.\\n\\nThank you very much for your time and effort!\\n\\\\\\n\\\\\\nAuthors of Paper #10957\"}",
"{\"title\": \"Request for Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer kiSm,\\n\\\\\\n\\\\\\nWe hope this email finds you well. Thank you again for your detailed review and valuable feedback on our submission.\\n\\nSince the rebuttal period is nearing its conclusion, we wanted to kindly follow up to request your feedback on our responses. We have worked diligently to address your concerns through additional experiments and detailed clarifications, and we would greatly appreciate hearing your thoughts on whether our rebuttal has resolved your questions.\\n\\nIf there are any remaining issues or points requiring further clarification, we would be happy to address them before the rebuttal period ends. Thank you once again for your time and effort in reviewing our work.\\n\\\\\\n\\\\\\nSincerely,\\n\\nAuthors of Paper #10957\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Comments on performance with larger models\", \"comment\": \"Dear Reviewer nbYg,\\n\\\\\\n\\\\\\nWe have re-implemented experiments on Llama-3.1-70B-Instruct, increasing the number of sampling attempts to obtain more preference sample pairs for training. By conducting a direct comparison between our proposed method RISE and the baseline DPO, we demonstrate the clear advantages of RISE in multiple math test datasets. The preliminary results are summarized in the table below:\\n\\n| Method | GSM8K | MATH | AQuA |SVAMP | AIME24 | Odyssey-MATH |\\n| :-----| :----: | :----: | :----: | :----: | :----: | :----: |\\n| Llama-3.1-70B-Instruct | 94.9 | 65.0|77.1|93.0|7/30|**60.4**|\\n| DPO | 94.5|65.5|77.1|93.1|7/30|58.4|\\n| RISE | **95.2**|**66.4**|**78.7**|**93.5**|7/30|60.0|\\n\\nOn MATH, RISE achieves a score of 66.4, compared to 65.5 for DPO and 65.0 for the base model. Similarly, on AQuA, RISE scores 78.7, outperforming both DPO (77.1) and the base model (77.1). These results highlight RISE's ability to address complex mathematical and logical reasoning tasks with greater accuracy and consistency. \\n\\nOn GSM8K, RISE achieves the highest score of 95.2, surpassing DPO (94.5) and the base model (94.9). For SVAMP, RISE improves slightly over DPO (93.5 vs. 93.1) and the base model (93.0), showcasing its reliability in solving various types of problems.\\n\\nFor the more challenging datasets, Odyssey-MATH, RISE remains competitive. RISE can maintain reasoning capability for complex tasks while DPO slightly hurts performance. These findings underscore RISE\\u2019s robustness and effectiveness as a method for improving reasoning in larger language models. **Moreover, it performs better than the original DPO method.**\\n\\\\\\n\\\\\\nAuthors of Paper #10957\"}"
]
} |
EHfn5fbFHw | Context-Augmented Code Generation Using Programming Knowledge Graphs | [
"Iman Saberi",
"Fatemeh Fard"
] | Large Language Models (LLMs) and Code-LLMs (CLLMs) have significantly improved code generation, but they frequently face difficulties when dealing with challenging and complex problems. Retrieval-Augmented Generation (RAG) addresses this issue by retrieving and integrating external knowledge at inference time. However, retrieval models often fail to find the most relevant context, and generation models, with limited context capacity, can hallucinate when given irrelevant data. We present a novel framework that leverages a Programming Knowledge Graph (PKG) to semantically represent and retrieve code. This approach enables fine-grained code retrieval by focusing on the most relevant segments while reducing irrelevant context through a tree-pruning technique. PKG is coupled with a re-ranking mechanism to further reduce hallucinations by selectively integrating non-RAG solutions. We propose two retrieval approaches—block-wise and function-wise—based on the PKG, optimizing context granularity. Evaluations on the HumanEval and MBPP benchmarks show our method improves pass@1 accuracy by up to 20\%, and outperforms state-of-the-art models by up to 34\% on MBPP. Our contributions include PKG-based retrieval, tree pruning to enhance retrieval precision, a re-ranking method for robust solution selection, and a Fill-in-the-Middle (FIM) enhancer module for automatic code augmentation with relevant comments and docstrings. | [
"Code Generation",
"RAG",
"Knowledge Graphs",
"Large Language Models"
] | Reject | https://openreview.net/pdf?id=EHfn5fbFHw | https://openreview.net/forum?id=EHfn5fbFHw | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xlcaAkoFsg",
"xjxsRhCRmE",
"x3MPGQ6M9I",
"qyykxDIYsl",
"osgnYc6JcY",
"o2RgnAMQfy",
"mSBuc4OBKi",
"grkLnPAAJ5",
"fWAYFjsZtk",
"eqyi3cfBnP",
"a5xeU7xaQd",
"YggtdZnSUj",
"SnLkZzUNM9",
"S9ReHRT9ep",
"REKBbhHHb8",
"PSvqxzvMyG",
"P07R3r573m",
"OoT7FoOqYU",
"JuGinHFGl4",
"IwvqY3KlGp",
"EeaVmJeov1",
"AW8b7AR5Rz",
"6wTxyNxfk5",
"5IEMRexuVY",
"24suIVrwdI"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732312686735,
1737523607435,
1732312929860,
1732563047237,
1731965980240,
1732312866276,
1731948786789,
1733159086782,
1733175212811,
1731660588415,
1732563162017,
1734752633509,
1732379139240,
1730742366530,
1731662113703,
1733178323852,
1732312459019,
1730721538921,
1730714522748,
1731663933246,
1732822698064,
1730720138063,
1731659993889,
1733175247252,
1733216102430
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_eMrs"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_eMrs"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_4mym"
],
[
"ICLR.cc/2025/Conference/Submission3926/Area_Chair_v3qH"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_eMrs"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_4mym"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_eMrs"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_uce7"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_CYoW"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3926/Reviewer_uce7"
]
],
"structured_content_str": [
"{\"title\": \"New version is uploaded\", \"comment\": \"# Dear Reviewer,\\n\\n## The new version of our paper is uploaded. Here is the change log:\\n**Styling Updates:**\\nWe have updated the styling of the tables to enhance their visual appeal and clarity. Specifically, the reversed colons issue has been corrected, and the figure illustrating the retrieval process has been revised in accordance with Reviewer 1's suggestions.\\n\\n**Cost Trade-Off Analysis:**\\nA new section discussing the cost trade-offs associated with the proposed method has been added before the conclusion (Section 5.1). This addition addresses the concerns raised about performance improvements vs computational and resource costs.\\n\\n**Challenges in Retrieving Information from PKG:**\\nA detailed discussion on the challenges of retrieving information from the Programming Knowledge Graph (PKG) has been included in the appendix as Section 7.1. This addition highlights the complexities and practical considerations of implementing the PKG-based retrieval system.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"New version is uploaded\", \"comment\": \"# Dear Reviewer,\\nWe have addressed your comments regarding the cost trade-off in Section 5.1 and the challenges in PKG retrieval in Section 7.1 of the Appendix.\\n\\n## The new version of our paper is uploaded. Here is the change log:\\n**Styling Updates:**\\nWe have updated the styling of the tables to enhance their visual appeal and clarity. Specifically, the reversed colons issue has been corrected, and the figure illustrating the retrieval process has been revised in accordance with Reviewer 1's suggestions.\\n\\n**Cost Trade-Off Analysis:**\\nA new section discussing the cost trade-offs associated with the proposed method has been added before the conclusion (Section 5.1). This addition addresses the concerns raised about performance improvements vs computational and resource costs.\\n\\n**Challenges in Retrieving Information from PKG:**\\nA detailed discussion on the challenges of retrieving information from the Programming Knowledge Graph (PKG) has been included in the appendix as Section 7.1. This addition highlights the complexities and practical considerations of implementing the PKG-based retrieval system.\"}",
"{\"title\": \"Thanks for your reply.\", \"comment\": \"We appreciate your valuable feedback and understand your concern regarding retrieval from different resources. Currently, due to GPU limitations, we have not yet been able to provide results addressing this aspect, but we are actively working on it.\\nTo ensure we address your concern effectively, we kindly seek clarification on which of the following experiments aligns best with your expectations:\\n1. Incorporating documentation into our PKG using a hierarchical document retriever and integrating it into the graph structure.\\n2. We already have RAG performance results obtained by augmenting the model with canonical solutions. To explore the effect of data type, we can transform these canonical solutions (code content) into textual explanations. By augmenting the questions with these solution explanations, we can compare the results of the two data types to evaluate their effectiveness.\\nYour guidance will help us prioritize the most relevant approach for our revisions.\"}",
"{\"title\": \"Thanks for your reply.\", \"comment\": \"**1. Then how about other retrieval methods? Like sparse BM25 or dense VoyageEmb. I would assume that they can retrieve some good information for the mixture of natural language and code, e.g., Python documents.**\\n\\nThe assumption that the other approaches can retrieve useful information from a mixture of NL document resources is not correct, based on the results of Code-RAG-Bench. Referring to the Code-RAG-Bench paper, the NoRAG accuracy of starcoder2 is 31.7 for HumanEval (Table 6); however, when it is augmented with Tutorials, the accuracy decreases to 27.4 and 29.3 for BM25 and OpenAI embeddings, respectively (Table 7). When the problems are augmented with Docs, the accuracy decreases to 29.3 and 24.4 for BM25 and OpenAI embeddings, respectively. When the prompt is augmented with GitHub, the accuracy decreases to 30.5 and 31.1 for BM25 and OpenAI embeddings, respectively. Therefore, as shown in the Code-RAG-Bench paper, BM25 and OpenAI embeddings lead to worse results compared to NoRAG when the context is augmented with NL content. \\n\\nAs the authors of Code-RAG-Bench mentioned, \\u201cIn general most models can be easily distracted or disturbed by additional contexts [41], and fail to conduct the designated code generation task, indicating much room for improvement for RACG.\\u201d \\n\\nTherefore, more content does not necessarily help the models with code generation, and we built our approach to help the models retrieve relevant content using the PKG, thus providing context that is actually helpful. \\n\\n[41] Z. Wang, J. Araki, Z. Jiang, M. R. Parvez, and G. Neubig. Learning to filter context for retrieval-augmented generation. \\n\\n**2.1 The \\\"StackOverflow\\\" in Code-RAG isn't code-centric. 
As per the original paper, it includes question descriptions, code responses, and textual explanations.**\\n\\nYou are correct that Stack Overflow includes a mix of natural language and code, featuring question descriptions, code responses, and textual explanations. While it may not be purely code-centric like GitHub repositories, its content is inherently tied to solving specific coding problems. This focus on problem-solving makes it a valuable resource for retrieval-augmented code generation, as the discussions often center around actual programming challenges and their solutions, tied to the question asked or the coding problem to be solved. In contrast, tutorials and documentation generally aim to explain broader concepts or provide structured overviews of programming topics, making them less code-specific. Therefore, while Stack Overflow may not be purely code-centric, it provides practical, problem-oriented context that aligns more closely with the needs of code generation tasks compared to tutorials or documentation. As explained in previous responses as part of the Code-RAG-Bench results, when Tutorials are added as context, the performance of retrieval algorithms drops. So, the NL should be related to the problem at hand, which is also the case for SO. In our approach, we also provide doc_string NL content for each function, allowing the retriever to take advantage of the descriptions of the code content. \\n\\n**2.2 For the referenced paper Code-RAG [1], I didn't find that SO shows better results than tutorials or documentation. The scores vary from one dataset to another.**\\n\\nPlease take a look at Table 7 in the Code-RAG-Bench paper for augmenting with different data sources (the NoRAG accuracy of starcoder2 is provided in Table 6). \\n\\n**3 I believe the proposed approach is somehow useful in some settings. 
However, it seems to be limited and too specific, as it \\\"led to worse results than the No-RAG baseline.\\\" when Python documentation and natural language are included.**\\n\\nWe can apply PKG to any dataset containing code content and in any programming language, so we do not believe it is too specific. Regarding the worse results compared to No-RAG, this is not due to our approach. As the authors of Code-RAG-Bench mentioned, \\u201cIn general most models can be easily distracted or disturbed by additional contexts [41], and fail to conduct the designated code generation task, indicating much room for improvement for RACG.\\u201d We saw the same behavior in the Code-RAG-Bench paper, as explained in the first response above. \\n\\nTherefore, more general documents such as Python documentation do not necessarily help the model generate code, as shown in the Code-RAG-Bench paper. However, the quality of the content to be retrieved is of high importance. As explained above, for example, in the case of Code-RAG-Bench, retrieval methods achieve better results when SO is used as context than when Tutorials are used. In our work, we improve this aspect by providing a way to retrieve the most relevant content for code generation through PKG. \\n\\n[41] Z. Wang, J. Araki, Z. Jiang, M. R. Parvez, and G. Neubig. Learning to filter context for retrieval-augmented generation.\"}",
"{\"title\": \"New version is uploaded\", \"comment\": \"# Dear Reviewer,\\nWe have addressed your comments regarding the cost trade-off in Section 5.1 and the challenges in PKG retrieval in Section 7.1 of the Appendix.\\n\\n## The new version of our paper is uploaded. Here is the change log:\\n**Styling Updates:**\\nWe have updated the styling of the tables to enhance their visual appeal and clarity. Specifically, the reversed colons issue has been corrected, and the figure illustrating the retrieval process has been revised in accordance with Reviewer 1's suggestions.\\n\\n**Cost Trade-Off Analysis:**\\nA new section discussing the cost trade-offs associated with the proposed method has been added before the conclusion (Section 5.1). This addition addresses the concerns raised about performance improvements vs computational and resource costs.\\n\\n**Challenges in Retrieving Information from PKG:**\\nA detailed discussion on the challenges of retrieving information from the Programming Knowledge Graph (PKG) has been included in the appendix as Section 7.1. This addition highlights the complexities and practical considerations of implementing the PKG-based retrieval system.\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"Thanks for the reply. I still have concerns about the retrieval setting.\\n\\n**1. The proposed approach failed when the knowledge graph was constructed on Python documents and natural language explanations and QA pairs.**\\n\\nThen how about other retrieval methods? Like sparse BM25 or dense VoyageEmb. I would assume that they can retrieve some good information for the mixture of natural language and code, e.g., Python documents.\\n\\n**2. This observation aligns with findings from the Code-RAG benchmark paper [1], where the authors noted that Stack Overflow content, which is code-centric, yields better results than tutorial or documentation content when used as context in RAG settings.**\\n\\n1) First, the \\\"StackOverflow\\\" in Code-RAG is definitely not a code-centric resource. According to the original paper, it's curated from RedPajama-Data-1T StackExchange split, and \\\"has a question description, code responses, and textual explanations\\\". Also, I've just checked the data manually and It's not a code-centric resource but a mixture of code and natural language, which is to my understanding a more appropriate retrieval resource for code generation.\\n\\n2) For the referenced paper Code-RAG [1], I didn't find that SO shows better results than tutorials or documentation. The scores vary from one dataset to another.\\n\\n**3. For example, PKG can be applied on the code-base of a project or proprietary repositories, helping with the retrieval of similar code that is tailored towards a specific context.**\\n\\nI believe the proposed approach is somehow useful in some settings. However, it seems to be limited and too specific, as it \\\"led to worse results than the No-RAG baseline.\\\" when Python documentation and natural language are included.\\n\\n**4. Open retrieval**\\n\\nIt means the retrieval resources can include any kind of information. 
This could also be found in Code-RAG [1].\\n\\nHope to get some further clarification for the above concerns.\"}",
"{\"title\": \"Thanks for your rebuttal and updated results (3)\", \"comment\": \"Thanks for your updates. I'm glad that **some of my concerns have been demonstrated in the experiments**. I also appreciate that the authors show a willingness to further address them. However, I'm more convinced that the paper is not ready in its current form. As the rebuttal phase is not meant to encourage major revisions and new experiments, I will keep my evaluation.\"}",
"{\"title\": \"New version is uploaded.\", \"comment\": \"Dear Reviewer,\\n\\nWe would like to kindly inform you that a revised version of the paper has been uploaded. In this updated version, we have extended our evaluation to include a new text-centric data source (Python tutorials). Our results demonstrate that the proposed approach effectively leverages text-centric data to retrieve more precise and relevant content, leading to measurable performance improvements in the code generation task. You can find the updated results in Section 7.2 of the Appendix.\"}",
"{\"title\": \"Addressing Retrieval Limitations, Dataset Structure, and Open-Domain Contexts in RAG Settings\", \"comment\": \"# Reviewer Concerns:\\n## 1. The setting is kind of weird to me: in real-world applications, code generation is usually augmented by code documents, natural language thoughts, or similar question-solution pairs, which is to say, natural language could be used to retrieve helpful information. Instead, only focusing on code representations may be limited:\\nThank you for your thoughtful observation. In our initial experiments, we tried augmenting the model with a knowledge graph that included Python documents and natural language explanations. However, this approach led to worse results than the No-RAG baseline. \\n\\nThis is because, when the goal is to generate accurate code, providing code-based context is more effective than natural language context. Natural language inputs often lead the model to focus on generating explanations or descriptions rather than precise code outputs. This observation aligns with findings from the Code-RAG benchmark paper [1], where the authors noted that Stack Overflow content, which is code-centric, yields better results than tutorial or documentation content when used as context in RAG settings. \\n\\nOur approach provides a technique to enable retrieving relevant code, which is suitable for real-world applications. For example, PKG can be applied on the code-base of a project or proprietary repositories, helping with the retrieval of similar code that is tailored towards a specific context. \\n\\n[1] Zora Zhiruo Wang, Akari Asai, Xinyan Velocity Yu, Frank F Xu, Yiqing Xie, Graham Neubig, \\nand Daniel Fried. Coderag-bench: Can retrieval augment code generation? \\n\\n## 2. Accordingly, in a not realistic setting, the experiments are not convincing enough to me. For example, on both HumanEval and MBPP, using BM25 for RAG consistently gets lower performance than no RAG. 
Also, with PKG the performance improves, it seems to be not a fair comparison:\\n\\nThank you for your attention to this detail. The reason BM25 and Voyage perform lower than No-RAG on both HumanEval and MBPP is due to how the data is structured. In these experiments, the dataset is composed of question-answer pairs. When we apply BM25 or VoyageEmb without any post-processing, the retrieved content includes question-answer pairs. Including these in the model\\u2019s context introduces additional questions and answers, which can confuse the model and lead to hallucinated outputs. \\nHowever, when we clean the dataset to contain only functions instead of full question-answer pairs (referred to as Func-BM25 in Tables 1 and 2), the performance improves. Func-BM25 outperforms No-RAG on HumanEval, indicating that retrieving only relevant function information is beneficial. \\n\\n## 3. What's the advantage of PKG to normal RAG in an open retrieval setting? \\n\\nThanks for your comment. We interpret your concern as being about retrieving from a dynamic data source such as Google Search. \\nThe advantage of PKG over normal RAG in an open retrieval setting is that it allows for more precise and relevant retrieval of information. \\nIn standard RAG, if we retrieve 100 answers from an open-domain source, we typically use a re-ranker to rank these answers and then select the top-n answers to add as additional context for the model. In normal RAG, we chunk data paragraph-wise or page-wise. Even though the retriever tries to retrieve the most similar chunks, they might still contain irrelevant data (e.g., irrelevant sentences). \\n\\nIn contrast, PKG organizes these 100 answers into a structured Programming Knowledge Graph, where only the most relevant nodes (i.e., the specific, necessary information) are retrieved. This approach minimizes irrelevant data and focuses on retrieving content in a fine-grained manner, even if it's within larger irrelevant sections. 
For example, consider a small code block that closely resembles the query but is contained within a function unrelated to the query. In standard RAG, this code block would not be retrieved because the retrieval process compares the embedding of the query against the embeddings of entire functions, rather than focusing on the individual code block. As a result, the function\\u2019s broader context may obscure the relevance of the specific code block to the query. In this way, PKG provides a more targeted, context-rich augmentation that helps the model perform better. \\n\\n## 4. Is max_token=512 in the experiments enough? \\nYes. As the benchmarks contain general Python programming and the canonical solutions are shorter than 512 tokens, max_new_tokens=512 is sufficient for our experiments. The authors of CodeT5+ [1] also evaluated their model with the same maximum length. \\n\\n[1] Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. CodeT5+: Open code large language models for code understanding and generation.\"}",
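The dilution effect described in this reply (a relevant code block hidden inside an unrelated function scores lower when the whole function is embedded) can be illustrated with a toy similarity measure. This is a sketch only: the bag-of-words cosine below stands in for a real embedding model, and all strings are invented for illustration.

```python
from collections import Counter

def bow_cosine(a, b):
    # Toy bag-of-words cosine similarity, standing in for a real embedding model.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sum(v * v for v in ca.values()) ** 0.5
    nb = sum(v * v for v in cb.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

query = "reverse a string"
relevant_block = "return text[::-1]  # reverse a string"
whole_function = (
    "def report(rows): totals = sum(rows); "
    "text = format(totals); " + relevant_block
)

# Scoring the whole function dilutes the match with unrelated statements,
# while scoring the block directly keeps the signal strong.
func_score = bow_cosine(query, whole_function)
block_score = bow_cosine(query, relevant_block)
assert block_score > func_score
```

Under any reasonable embedding the same ordering tends to hold, which is the motivation given here for block-level nodes in the PKG.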
"{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thanks for the answers! My questions are answered.\"}",
"{\"metareview\": \"This paper introduces a novel framework that utilizes a programming knowledge graph (PKG) to enhance code retrieval and generation by semantically representing code and employing a tree-pruning technique to minimize irrelevant context. The paper still needs improvement in several respects: the limited focus on code without natural language context, questionable experimental settings, the resource demands of maintaining the Programming Knowledge Graph, insufficient exploration of scenarios where the PKG may underperform, and a lack of clarity on scalability and the re-ranking mechanism.\", \"additional_comments_on_reviewer_discussion\": \"The discussion was thorough. The authors submitted a new version and the reviewers provided feedback.\"}",
"{\"title\": \"Thanks for your rebuttal (2)\", \"comment\": \"Thanks for the revised paper. I still feel my concerns have not been adequately addressed. Take some of them listed below:\\n\\n**1. Thank you for recognizing that SO is actually not code-centric like GitHub repositories, which then fails to support your rebuttal to the first concern about the retrieval setting.**\\n\\n**Table 7 HumanEval**\\n| Method | Tutorial | Docs | SO | GitHub | All |\\n|-------------|----------|------|-------|--------|------|\\n| **BM25** | 27.4 | 29.3 | 32.9 | 30.5 | 97.6 |\\n| **GIST-large** | 34.8 | 26.7 | 32.3 | 32.9 | 69.1 |\\n| **OpenAI** | 29.3 | 24.4 | 36.0 | 31.1 | 97.6 |\\n\\n**Table 8 ODEX**\\n| Method | Tutorial | Docs | SO | GitHub | All |\\n|-------------|----------|-------|-------|--------|------|\\n| **BM25** | 13.4 | 14.1 | 11.6 | 15.9 | 16.2 |\\n| **GIST-large** | 15.7 | 17.3 | 11.4 | 15.5 | 17.1 |\\n| **OpenAI** | 14.1 | 15.9 | 10.9 | 16.9 | 15.3 |\\n\\n**Table 9 RepoEval**\\n| Method | Tutorial | Docs | SO | GitHub | Open | L+O |\\n|-------------|----------|-------|-------|--------|-------|------|\\n| **BM25** | 25.2 | 23.9 | 23.6 | 25.5 | 23.6 | 31.4 |\\n| **GIST-large** | 23.3 | 21.7 | 24.7 | 24.4 | 24.1 | 41.8 |\\n| **OpenAI** | 24.1 | 24.1 | 23.1 | 22.8 | 24.9 | 50.9 |\\n\\n**2. The authors insisted in their two responses that SO (StackOverflow) yielded better scores than other resources. I'd like to refer to the original tables from Code-RAG-Bench above.**\\n\\nFrom the results, we can find that SO shows the worst performance on ODEX, the average performance on RepoEval, and the best performance on HumanEval. However, even on HumanEval, SO gets a lower score than Tutorial and GitHub. Obviously, it's not correct to claim that code-centric content is better than other resources. \\n\\nFurthermore, the authors misinterpreted my concerns as more context, which is not relevant to the retrieval setting concern.\\n\\n**3. 
The authors kept referring to other papers (e.g., Code-RAG-Bench) and haven't yet provided some additional experimental results.**\\n\\nFor example, as the authors replied, \\\"In our initial experiments, we tried augmenting the model with a knowledge graph that included Python documents and natural language explanations. However, this approach led to worse results than the No-RAG baseline.\\\" I explicitly expressed my concerns about the performance of other retrieval methods in my reply.\\n\\nThrough the discussions, I have a better understanding of the paper, and the authors haven't addressed my concerns. Therefore, I will decrease my rating accordingly.\"}",
"{\"summary\": [\"Problem statement:\", \"Models fail on complex code generation tasks\", \"RAG improves with external knowledge, but fails on relevant context\", \"Irrelevant information causes hallucinations\"], \"solution\": [\"PKG: semantically represent and retrieve code\", \"Provides fine-grained code retrieval by focusing on the most relevant segments while reducing irrelevant context through a tree-pruning technique.\", \"PKG is coupled with a re-ranking mechanism to reduce even more hallucinations by selectively integrating non-RAG solutions.\", \"2 retrieval approaches\\u2014block-wise and function-wise\\u2014based on the PKG, optimizing context granularity.\"], \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"PKG: semantically represent and retrieve code and provides fine-grained code retrieval by focusing on the most relevant segments while reducing irrelevant context through a tree-pruning technique.\", \"The results indicate the benefits of using PKG.\", \"The experiments are thorough, and the paper is well written.\"], \"weaknesses\": [\"There still exists a gap between ideal ranker & re-ranker, what was the reason for it.\", \"How reliable is the result? 
Based on multiple generations with low temperatures\", \"Samples in MBPP & HumanEval present with misaligned or incorrect NL, under which error is that catered to?\", \"When block-PKG performs better than func-PKG then why is ranking done for considering func-PKG?\", \"Need to show cases of extraction with PKG providing better function than current RAG.\"], \"questions\": [\"Line 097 paragraph can be converted to bullets for better visual capture\", \"Line 146 is while added to list of blocks\", \"Correct inverted commas in line 218, 221 ...\", \"Section 2.2 RETRIEVAL FROM PKG (steps) should be formatted same as section 2.1\", \"Fig 3 step 2: shouldn't the similarity be between the encoded query and function nodes, rather than between function nodes as shown in the diagram\", \"Appendix experiment that can be added: [optional]\", \"Compare performance when an empty node is taken as a substitute for non-RAG option of input augmentation\", \"Table 2 formatting needs to be corrected for column value alignment\", \"Also format the table borders for Table 1 and 2\", \"For all figures, increase font size, not readable\", \"Table 3 visual impact can be improved by color incorporation to display increase or decrease of error with PKG incorporation\", \"When block-PKG performs better than func-PKG then why is ranking done for considering func-PKG?\", \"Which are cases where func gives better results than block-PKG and why?\", \"Line 1029 mention using block PKG\", \"Line 1012 also provide example for with RAG what result came\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Evaluating the Resource Trade-offs, Language Generalizability, and Performance Enhancements of PKG\", \"comment\": \"# Reviewer Concerns:\\n## 1. Building and maintaining the Programming Knowledge Graph may be resource-intensive and require domain expertise.\\n\\nThank you for your insightful observation. We will add the cost trade-off of using PKG compared to current RAG approaches. We will explain these details in the paper by Nov 27th in a new appendix section (cost trade-off section). We can consider two versions of PKGs for discussion:\\n\\n### **PKG with enhancer module, which aims to enhance the graph with doc-string data**:\\n- Step 1: downloading the dataset (143,000 QA, 280MB): negligible, a few seconds.\\n- Step 2: indexing the dataset: 44 minutes.\\n\\nIn general for 143,000 Q&A:\\n- Step 3: code block extraction: 25 minutes.\\n- Step 4: enhancing the PKG with doc_strings using an LLM on one A100: 82 hours.\\n- Step 5: encoding the PKG using Voyage embeddings via API calls: 4 hours.\\n- Step 6: generating the Neo4j graph: 33 minutes.\\n\\nIn general for 143,000 Q&A (~500,000 nodes will be added to the graph):\\n- It took around: ***44 minutes.***\\n- Total storage: ***315MB*** (index and text data)\\n\\nBased on the comparisons and the results presented in Tables 1 and 2, we can conclude that removing the func-block does not significantly impact performance. In comparison to existing retrieval-augmented generation (RAG) methods, which primarily rely on embedding approaches (such as Voyage Embeddings), our approach takes an additional hour to process the selected dataset. However, this extra time results in a significant performance improvement over standard embedding-based RAG methods.\\n\\n## 2. The framework\\u2019s effectiveness may be constrained to specific programming languages (e.g. Python) of code tasks\\n\\nThank you for your insightful comment. Our approach works with any programming language that supports AST extraction. For languages without a native AST library, third-party tools like tree-sitter, which supports over 25 languages, can be used. PKG can be built on any language with structural code blocks, treating each block as the smallest semantic node and retrieving the most similar nodes during inference.\\n\\n## 3. The paper could benefit from a deeper exploration where PKG and retrieval mechanisms fail to improve or potentially hinder code generation quality.\\n\\nThank you for the feedback. If the model requires domain expertise, the PKG should reflect that domain. For example, if the model targets a specific framework, the dataset must include it, or for project-specific code, the PKG should contain project data. Failures occur when querying a graph lacking domain knowledge. We will elaborate on this in the paper by Nov 27th.\\n\\n## 4. What is the computational cost of building and updating the PKG, and how frequently does it need maintenance to remain effective?\\n\\nWe have provided the computational costs of building the PKG under the **first concern**.\\n\\n**Graph updates:**\\nNeo4j's semantic vector indexing ensures efficient graph updates, with O(log N) complexity for adding nodes and O(log M) for relationships, where N and M are the total numbers of nodes and relationships. This logarithmic growth ensures scalability.\\n\\n**Maintenance:**\\nAs long as the existing content in the graph is not deprecated in the target programming language, we can continue to benefit from the knowledge graph.\\n\\n## 5. How does the additional complexity introduced by tree pruning and re-ranking affect code generation performance, and is this overhead manageable?\\n\\nThanks for pointing this out! We conducted tree pruning and re-ranking in a Neo4j environment using Cypher queries and the Graph Data Science plugin. With a graph of around 500,000 nodes, each query took under 5 seconds on an M1 chip. Despite the added complexity, the performance improvement was significant and the overhead manageable.\"}",
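The AST-based node extraction mentioned in this rebuttal (treating each structural code block as a semantic node, with an attached docstring) can be sketched for Python using the standard `ast` module. This is an illustrative sketch only: the `name`/`code`/`doc` node schema is an assumption, not the paper's actual PKG schema, and other languages would use a parser such as tree-sitter instead.

```python
import ast

def extract_function_nodes(source):
    """Extract each function definition as a candidate graph node,
    paired with its docstring (if any)."""
    tree = ast.parse(source)
    nodes = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            nodes.append({
                "name": node.name,
                "code": ast.get_source_segment(source, node),  # exact source text
                "doc": ast.get_docstring(node),
            })
    return nodes
```

Each returned dict could then be embedded and stored as a node; finer-grained blocks (loops, conditionals) could be collected the same way by matching other `ast` node types.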
"{\"title\": \"Thanks for your reply.\", \"comment\": \"We sincerely appreciate the time and effort you dedicated to reviewing our paper and the thoughtful concerns you raised in your comments. Your feedback was valuable in guiding the improvement of our work. We would greatly appreciate it if you could provide additional insights on the specific experiments or areas of investigation you believe should be included in future iterations of our research.\"}",
"{\"title\": \"New version is uploaded.\", \"comment\": \"# Dear Reviewer,\\nWe have addressed the styling issues you have mentioned.\\n\\n## The new version of our paper is uploaded. Here is the change log:\\n**Styling Updates:**\\nWe have updated the styling of the tables to enhance their visual appeal and clarity. Specifically, the reversed colons issue has been corrected, and the figure illustrating the retrieval process has been revised in accordance with Reviewer 1's suggestions.\\n\\n**Cost Trade-Off Analysis:**\\nA new section discussing the cost trade-offs associated with the proposed method has been added before the conclusion (Section 5.1). This addition addresses the concerns raised about performance improvements vs computational and resource costs.\\n\\n**Challenges in Retrieving Information from PKG:**\\nA detailed discussion on the challenges of retrieving information from the Programming Knowledge Graph (PKG) has been included in the appendix as Section 7.1. This addition highlights the complexities and practical considerations of implementing the PKG-based retrieval system.\"}",
"{\"summary\": \"The paper proposes to learn a programming knowledge graph from a predefined code QA dataset to augment code generation. The paper aims to retrieve proper code segments semantically and thus improve the performance.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea and the presentation are clear.\", \"The method is demonstrated to be useful in the given setting, i.e., retrieve code segments from a given code QA dataset. It's a somewhat novel idea to learn code representation in a knowledge graph.\"], \"weaknesses\": [\"The setting is kind of weird to me: in real-world applications, code generation is usually augmented by code documents, natural language thoughts, or similar question-solution pairs, which is to say, natural language could be used to retrieve helpful information. Instead, only focusing on code representations may be limited.\", \"Accordingly, in a not realistic setting, the experiments are not convincing enough to me. For example, on both HumanEval and MBPP, using BM25 for RAG consistently gets lower performance than no RAG. Also, with PKG the performance improves, it seems to be not a fair comparison.\"], \"questions\": [\"What's the advantage of PKG to normal RAG in an open retrieval setting?\", \"Is `max_token=512` in the experiments enough?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a novel framework for enhancing code generation by integrating a Programming Knowledge Graph (PKG) with existing language models. By structuring code as a graph that captures hierarchical and semantic relationships, the framework supports granular retrieval, improving contextual relevance and reducing the inclusion of irrelevant information. The approach employs block-level and function-level retrieval strategies, coupled with a re-ranking mechanism, to increase generation accuracy and mitigate hallucinations induced by irrelevant context. Extensive evaluations on widely recognized benchmarks (HumanEval and MBPP) show that this PKG-based approach achieves notable improvements in pass@1 accuracy and reduces assertion errors, outperforming standard Retrieval-Augmented Generation (RAG) methods, especially when using tree pruning to eliminate extraneous context.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work introduces an innovative use of knowledge graphs to represent programming knowledge, advancing the precision of semantic retrieval in code generation. By integrating hierarchical relationships within the PKG, this approach opens new avenues for enhancing retrieval-augmented models in the code generation domain.\\n\\nThrough evaluations on HumanEval and MBPP benchmarks, the method demonstrates improvements over existing RAG approaches, including NoRAG and other RAG methods. The error analysis and topic-based performance breakdown add credibility, highlighting specific problem types that benefit from the PKG-based approach.\", \"weaknesses\": \"The PKG generation and retrieval processes involve multiple modules and steps that contribute to system complexity and potentially high computational cost. However, the paper does not delve into the scalability or efficiency implications of these processes. 
A discussion on computational trade-offs would provide a more complete assessment of its feasibility.\\n\\nThe re-ranking mechanism in the paper is used to enhance retrieval accuracy, but the decision-making process is not detailed enough. The author should provide a more comprehensive description and explanation of the method. \\n\\nAlthough PKG generally performs well, the paper lacks an in-depth analysis of specific categories (e.g., string manipulation and data structure tasks) where PKG retrieval does not yield improvements. A focused examination of these cases could reveal limitations inherent to graph-based retrieval for these types, thereby helping to identify conditions under which PKG may be less effective.\", \"questions\": \"See the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Assessing the Computational Trade-offs and Limitations of PKG in Code Generation: Efficiency, Scalability, and Task-Specific Challenges\", \"comment\": \"# Reviewer Concerns:\\n\\n## 1. The PKG generation and retrieval involve multiple modules, adding complexity and cost. However, the paper lacks discussion on scalability and computational trade-offs, limiting its feasibility assessment.\\n\\nThank you for your insightful observation. We will add the cost trade-off of using PKG compared to current RAG approaches. We will explain these details in the paper by Nov 27th in a new appendix section (cost trade-off section). We can consider two versions of PKGs for discussion:\\n\\n### **PKG with enhancer module, which aims to enhance the graph with doc-string data**:\\n- Step 1: downloading the dataset (143,000 QA, 280MB): negligible, a few seconds.\\n- Step 2: indexing the dataset: 44 minutes.\\n\\nIn general for 143,000 Q&A:\\n- Step 3: code block extraction: 25 minutes.\\n- Step 4: enhancing the PKG with doc_strings using an LLM on one A100: 82 hours.\\n- Step 5: encoding the PKG using Voyage embeddings via API calls: 4 hours.\\n- Step 6: generating the Neo4j graph: 33 minutes.\\n\\nIn general for 143,000 Q&A (~500,000 nodes will be added to the graph):\\n- It took around: ***44 minutes.***\\n- Total storage: ***315MB*** (index and text data)\\n\\nThe results in Tables 1 and 2 show that removing the func-block has minimal impact on performance. While our approach takes an additional hour to process the dataset compared to embedding-based RAG methods like Voyage Embeddings, it leads to a significant performance improvement.\\n\\n**Graph updates:**\\nNeo4j's semantic vector indexing ensures efficient additions, with time complexities of O(log N) for nodes and O(log M) for relationships, where N and M are the total numbers of nodes and relationships.\\n\\n**Maintenance:**\\nThe graph remains useful as long as its contents are not deprecated in the target programming language.\\n\\n## 2. The re-ranking mechanism in the paper is used to enhance retrieval accuracy, but the decision-making process is not detailed enough. The author should provide a more comprehensive description and explanation of the method.\\n\\nThank you for your consideration. Our re-ranking mechanism involves three steps:\\n1. AST validation: ensuring syntactic correctness via Abstract Syntax Tree (AST) checks.\\n2. Execution testing: running valid answers and excluding those with runtime errors.\\n3. Embedding similarity: selecting the answer most similar to the query based on embeddings.\\n\\nDetails are in Section 2.3. Let us know which parts require further clarification.\\n\\n## 3. While PKG performs well overall, the paper lacks analysis of categories like string manipulation and data structures where PKG retrieval falls short. Examining these cases could uncover limitations of graph-based retrieval and identify conditions where PKG is less effective.\\n\\nThank you for your comment. We agree that exploring cases where PKG retrieval does not improve results, especially in string manipulation and data structure tasks, could provide valuable insights into the limitations of graph-based retrieval and how LLMs interpret input data.\\n\\nIn string manipulation tasks, the challenge lies in the model's focus on semantic meaning rather than string structure.\\n\\nExample problem: write a Python code to convert lowercase to uppercase and vice versa: \\\"Hello\\\" to \\\"hELLO\\\" and \\\"pYthon\\\" to \\\"PyTHON\\\".\\n\\nChallenges:\\n- Embedding model\\u2019s focus on semantics: in RAG, the embedder retrieves content based on meaning, not formatting. It may focus on \\\"hello\\\" as a greeting rather than the case transformation.\\n- LLM\\u2019s tokenization and semantic bias: LLMs tokenize based on meaning, not formatting, making case transformation difficult as \\\"Hello\\\" and \\\"hello\\\" are treated the same.\\n\\nIn summary, both RAG retrieval and LLM tokenization prioritize semantics over formatting, complicating string manipulation tasks and limiting PKG\\u2019s effectiveness in these cases.\"}",
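The three re-ranking steps described in this rebuttal (AST validation, execution testing, embedding similarity) can be sketched as below. This is a minimal illustration, not the authors' actual implementation: `embed` is an assumed caller-supplied function mapping a string to a numeric vector (the paper uses Voyage embeddings), and real execution testing would run candidates in a sandbox rather than a bare `exec`.

```python
import ast

def rerank(query, candidates, embed):
    """Pick the best candidate via AST validation, execution
    testing, then embedding similarity to the query."""
    # Step 1: AST validation -- keep only syntactically valid Python.
    valid = []
    for code in candidates:
        try:
            ast.parse(code)
            valid.append(code)
        except SyntaxError:
            pass
    # Step 2: execution testing -- drop candidates that raise at runtime.
    runnable = []
    for code in valid:
        try:
            exec(code, {})  # run in an isolated namespace (no sandboxing here)
            runnable.append(code)
        except Exception:
            pass
    # Step 3: embedding similarity -- pick the candidate closest to the query.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    q = embed(query)
    return max(runnable, key=lambda c: cosine(embed(c), q), default=None)
```

A toy character-frequency `embed` is enough to exercise the control flow; swapping in a learned embedding model changes only step 3.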
"{\"title\": \"New experiments have been done.\", \"comment\": \"We sincerely appreciate your valuable feedback regarding the performance of PKG on textual data. To address this concern, we conducted a new experiment specifically designed to evaluate PKG\\u2019s performance on text-centric content. Using the tutorial data source from the Code-RAG-Bench paper, we extracted the JSON representations of Python-related content (obtaining the JSONs from an LLM took approximately three hours) and constructed a graph based on the hierarchical representations of these JSON objects.\\nOur results indicate a significant improvement in performance compared to the standard RAG approach reported for StarCoder2-7B on the same data source. Notably, for some models, such as Llama3, PKG applied to textual content even outperforms PKG applied to code content, supporting the validity of your earlier comments.\\nThe detailed findings of this experiment can be found in Section 7.2 of the Appendix. We hope this additional evidence addresses your concerns, and we kindly ask you to reconsider your score in light of these results.\"}",
"{\"summary\": \"The paper proposes a novel code generation framework that leverages a Programming Knowledge Graph for improved context retrieval, reducing irrelevant data with tree pruning and re-ranking techniques.\\nBy implementing function- and block-level retrieval that provides fine-grained, contextually relevant code, the framework enables more precise and relevant context for code generation tasks, achieving significant performance gains on benchmarks like HumanEval and MBPP.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The PKG approach adds a structured layer to context retrieval, improving relevance in code generation.\\n2. Tree pruning and re-ranking help eliminate irrelevant information, enhancing the quality of generated code.\\n3. Through function- and block-level code, the framework could provide highly relevant and precise context.\\n4. The approach demonstrates considerable improvements on established benchmarks like HumanEval and MBPP.\", \"weaknesses\": \"1. Building and maintaining the Programming Knowledge Graph may be resource-intensive and require domain expertise.\\n2.The framework\\u2019s effectiveness may be constrained to specific programming languages (e.g. Python) of code tasks.\\n3.The paper could benefit from a deeper exploration where PKG and retrieval mechanisms fail to improve or potentially hinder code generation quality.\", \"questions\": \"1. What is the computational cost of building and updating the PKG, and how frequently does it need maintenance to remain effective?\\n2. How does the additional complexity introduced by tree pruning and re-ranking affect code generation performance, and is this overhead manageable?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Exploring the Reliability, Errors, and Comparative Effectiveness of PKG-Based Contextual Retrieval in Code Generation.\", \"comment\": \"# Reviewer concerns:\\n## 1. There still exists a gap between ideal ranker & re-ranker, what was the reason for it. \\nThank you for your thoughtful observation. The gap between the ideal ranker and the re-ranker arises because the ideal ranker has access to information that the re-ranker does not. \\nThe ideal ranker can always pick the correct solution from the candidate pool if one exists because it assumes that we know beforehand whether each solution is correct or incorrect. It acts as an upper bound for the re-ranker in our work. \\nIn contrast, the re-ranker operates under real-world conditions, where it does not have access to ground-truth labels. Instead, it must evaluate each solution based on available signals like Abstract Syntax Trees (ASTs), execution results, and embedding comparisons to select the best candidate. Without direct labels, the re-ranker\\u2019s choice is based on these approximations, which can lead to some incorrect selections and thus a performance gap compared to the ideal ranker. \\n\\n## 2. How reliable is the result? Based on multiple generations with low temperatures?\\nTo ensure result reliability, all experiments were conducted with a temperature of 0, allowing for deterministic outputs. Additionally, all prompts used in these experiments are provided in the appendix (sections 7.1-7.4), enabling full reproducibility of the results. \\n\\n## 3. Samples in MBPP & HumanEval present with misaligned or incorrect NL, under which error is that catered to?\\nThanks for your attention, can you clarify which examples are misaligned? We used the version of HumanEval samples available in the original Huggingface dataset. For MBPP, we use a filtered version curated by the CodeRAG-Bench authors[1], who removed samples with fewer test cases, as these were deemed less reliable. 
We are not sure which examples you mean. If you provide more details, we will answer more specifically. \\n[1] Zora Zhiruo Wang, Akari Asai, Xinyan Velocity Yu, Frank F Xu, Yiqing Xie, Graham Neubig, and Daniel Fried. Coderag-bench: Can retrieval augment code generation? \\n\\n## 4. When block-PKG performs better than func-PKG then why is ranking done for considering func-PKG?\\nThank you for your insightful question. The re-ranking is applied to four approaches: NoRAG, BM25, Func-PKG, and Block-PKG. We include Func-PKG in the evaluation because, in certain cases, it outperforms Block-PKG. By incorporating Func-PKG, we aim to leverage the strengths of this approach where it shows an advantage. \\n\\n## 5. Need to show cases of extraction with PKG providing better function than current RAG. \\nTo better illustrate how PKG outperforms current RAG approaches, we have already included results from standard RAG methods like BM25 and Voyage Embedding in our comparison. In the appendix (section 7.4), we provided examples showing the distinctions between PKG and NoRAG. We will further strengthen this section by adding more examples that compare PKG to BM25 and Voyage RAG, making PKG's advantages even clearer. \\n\\n## Styling format suggestions:\\nThank you for your detailed feedback. We will address your comments and incorporate the necessary revisions in the updated version of the paper by November 27th.\"}",
"{\"title\": \"New version is uploaded.\", \"comment\": \"Dear Reviewer,\\n\\nWe would like to kindly inform you that a revised version of the paper has been uploaded. In this updated version, we have extended our evaluation to include a new text-centric data source (Python tutorials). Our results demonstrate that the proposed approach effectively leverages text-centric data to retrieve more precise and relevant content, leading to measurable performance improvements in the code generation task. You can find the updated results in Section 7.2 of the Appendix.\"}",
"{\"comment\": \"Dear author, thank you for your reply. Your reply clarified some concerns, I think my score is appropriate for your current article, so I maintain the score.\"}"
]
} |
EHYbqCDRtM | Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process | [
"Xingyu Ji",
"Jiale Liu",
"Lu Li",
"Maojun Wang",
"Zeyu Zhang"
] | Representation learning on text-attributed graphs (TAGs) has attracted significant interest due to its wide-ranging real-world applications, particularly through Graph Neural Networks (GNNs). Traditional GNN methods focus on encoding the structural information of graphs, often using shallow text embeddings for node or edge attributes. This limits the model to understand the rich semantic information in the data and its reasoning ability for complex downstream tasks, while also lacking interpretability. With the rise of large language models (LLMs), an increasing number of studies are combining them with GNNs for graph representation learning and downstream tasks. While these approaches effectively leverage the rich semantic information in TAGs datasets, their main drawback is that they are only partially interpretable, which limits their application in critical fields. In this paper, we propose a verbalized graph representation learning (VGRL) method which is fully interpretable. In contrast to traditional graph machine learning models, which are usually optimized within a continuous parameter space, VGRL constrains this parameter space to be text description which ensures complete interpretability throughout the entire process, making it easier for users to understand and trust the decisions of the model. We conduct several studies to empirically evaluate the effectiveness of VGRL and we believe these method can serve as a stepping stone in graph representation learning. | [
"large language models",
"fully interpretable",
"graph representation learning"
] | https://openreview.net/pdf?id=EHYbqCDRtM | https://openreview.net/forum?id=EHYbqCDRtM | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rgE4zQXK0Q",
"nQKOJn4GT4",
"n70DtKV7wc",
"SJE2kjD35r",
"CXBmTmE8ZA"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1729365884134,
1730701884802,
1731448063869,
1730414572005,
1730589679321
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1329/Reviewer_gEHU"
],
[
"ICLR.cc/2025/Conference/Submission1329/Reviewer_4JTH"
],
[
"ICLR.cc/2025/Conference/Submission1329/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1329/Reviewer_U9Wf"
],
[
"ICLR.cc/2025/Conference/Submission1329/Reviewer_57YY"
]
],
"structured_content_str": [
"{\"summary\": \"This article mainly studies how to improve the explainability of Graph Neural Networks (GNNs), particularly focusing on the joint explainability across three levels: input, training process, and decision making. The authors propose to address this problem in the text space and put forward a framework that utilizes a Large Language Model (LLM) as both a predictor and an optimizer to generate explanations in natural language.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. This article explores an interesting question: how to explore the explainability of traditional models, such as GNNs, in the text space? To address this question, the article proposes a viable solution through performing optimization in the text space.\", \"weaknesses\": \"1. This article's writing suffers from vagueness, often employing ambiguous sentences that hinder readers from effectively grasping the technical details. For instance, the use of \\\"stepping stone\\\" in line 31 and \\\"deeper insight\\\" in line 44 lack concrete explanations or specific examples. Additionally, the use of certain technical terms needs to be more precise. For example, \\\"training dynamics\\\" in line 109 is mentioned without any subsequent elaboration or relevant content in the later sections. This lack of clarity and precision in language can significantly impede the reader's understanding of the proposed methods and contributions.\\n2. For lines 85-100, it's confusing that authors directly turn to the discussion of Graph LLMs without motivating its relationship to the explainability of GNNs\\n3. I strongly recommend that authors check the rigorous definition of explainability in machine learning, such as [1]. The author seems to misunderstand the concept of explainability. They assume that expressing the prediction logic in natural language automatically equates to explainability. However, this is not the case. 
Natural language can generate irrelevant or even misleading explanations that do not reflect the true underlying reasoning of the model. \\n4. Following the previous points, there's no experiment evaluating the explainability of the model but focusing on the accuracy of prediction. A case study is not a rigorous way to check the effectiveness. \\n5. No explainability-related work is considered in the related works part. Specifically, some highly relevant works like [2] are omitted. In terms of prompt optimization, the general philosophy is highly similar to [3]. \\n6. The methodology part is hard to follow. I strongly recommend that the authors summarize it into some algorithms. \\n7. The theoretical part is a simple replication of the one in [4], which can't well explain the empirical part. \\n\\n[1] Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8), 832.\\n\\n[2] Zhang, J., Liu, J., Luo, D., Neville, J., & Wei, H. (2024). LLMExplainer: Large Language Model based Bayesian Inference for Graph Explanation Generation. arXiv preprint arXiv:2407.15351.\\n\\n[3] Yuksekgonul, M., Bianchi, F., Boen, J., Liu, S., Huang, Z., Guestrin, C., & Zou, J. (2024). TextGrad: Automatic\\\" Differentiation\\\" via Text. arXiv preprint arXiv:2406.07496.\\n\\n[4] He, X., Bresson, X., Laurent, T., Perold, A., LeCun, Y., & Hooi, B. (2023). Harnessing explanations: Llm-to-lm interpreter for enhanced text-attributed graph representation learning. arXiv preprint arXiv:2305.19523.\", \"questions\": \"1. I do not quite understand the whole process of the methodology. Could you summarize it as an algorithm?\\n2. What's the \\\"embedding\\\" in line 268? How do you implement it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper discusses a training-free LLM framework VGRL for node classification on graph structured data. The core idea is to optimize the verbalization prompt called LLM optimizer.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I don't recognize a notable strength of this paper.\", \"weaknesses\": \"1. The paper seems to combine the recent work [1] and verbalized prompt based node classification work [2] together and only shows improvements on one obsolete graph benchmark: Cora.\\n\\n2. The proposed work doesn't seem to be tailored for graph tasks, which doesn't solve challenges specific to graph structured data. And the authors also don't mention graphs in the two challenges they proposed. \\n\\n[1] Verbalized machine learning: Revisiting machine learning with language models\\n[2] HARNESSING EXPLANATIONS: LLM-TO-LM INTERPRETER FOR ENHANCED TEXT-ATTRIBUTED GRAPH REPRESENTATION LEARNING\", \"questions\": \"Same as weakness.\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"As mentioned by the other reviewer, I agree that the theoretical contribution is identical to the paper \\\"HARNESSING EXPLANATIONS: LLM-TO-LM INTERPRETER FOR ENHANCED TEXT-ATTRIBUTED GRAPH REPRESENTATION LEARNING\\\" appendix A at ICLR 2024.\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"The paper introduces an interpretable framework, VGRL, which constrains the model parameter space to text descriptions to ensure full interpretability throughout the entire process. Specifically, VGRL performs node classification on TAGs by utilizing a frozen LLM as an enhancer, predictor, optimizer, and summarizer to simulate iterative training.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper introduces a new way to utilize LLMs for interpretable graph learning.\", \"weaknesses\": [\"The experiments include only a single dataset. The authors validate their method solely on the Cora dataset, which is a small TAG. Experiments on additional TAGs, such as Citeseer and ogbn-arxiv, should be considered.\", \"Lack of baselines and SOTA models. The authors compare their method only with a vanilla LLM, without any comparison to existing related work. This omission prevents readers from assessing whether the proposed method represents an improvement and by how much compared to existing work.\", \"Section 6 (THEORETICAL ANALYSIS) and Appendix A are nearly identical to Section 4.4 and Appendix A in [1], with only variable names changed, which constitutes plagiarism. It is recommended that the authors address this issue seriously and provide clarification.\", \"[1] Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning, ICLR 2024, https://arxiv.org/pdf/2305.19523\"], \"questions\": \"See Weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper tries to solve two major challenges in representation learning on text-attributed graphs, including the interpretability of models and efficiency in model optimization. The former is solved by creating intuitive connections and generating textual explanations, while the latter is addressed by leveraging prompt engineering approaches.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors seek to develop a fully interpretable method for graph representation learning, which is a promising research direction.\\n\\nThe authors provide open-source code for peer review.\", \"weaknesses\": \"The presentation requires significant improvement. For example,\\n\\n In line 141, \\u201cAnd iterates over the input mini-batch B one-pass input.\\u201d \\n In line 144, the definition of one-hop neighbors. \\n\\nIn the experimental section, the authors present only ablation studies without any comparison to SOTA methods.\\n\\nWhile the authors emphasize interpretability and efficiency as their primary contributions, they provide no empirical results or theoretical analysis to substantiate these claims.\\n\\nIn the theoretical analysis section, the theorem and proof closely resemble existing work [1] but lack proper citation, which may constitute plagiarism.\\n\\n[1] Xiaoxin He, Xavier Bresson, Thomas Laurent, Adam Perold, Yann LeCun, and Bryan Hooi. Harnessing explanations: Llm-to-lm interpreter for enhanced text-attributed graph representation learning. arXiv preprint arXiv:2305.19523, 2023.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EGxgZzDODh | Neural Probabilistic Logic Learning for Knowledge Graph Reasoning | [
"Fengsong Sun",
"Jinyu Wang",
"Zhiqing Wei",
"Xianchao Zhang"
] | Knowledge graph (KG) reasoning is a task that aims to predict unknown facts based on known factual samples. Reasoning methods can be divided into two categories: rule-based methods and KG-embedding based methods. The former possesses precise reasoning capabilities but finds it challenging to reason efficiently over large-scale knowledge graphs. While gaining the ability to reason over large-scale knowledge graphs, the latter sacrifices reasoning accuracy. This paper aims to design a reasoning framework called Neural Probabilistic Logic Learning(NPLL) that achieves accurate reasoning on knowledge graphs. Our approach introduces a scoring module that effectively enhances the expressive power of embedding networks. We strike a balance between model simplicity and reasoning capabilities by incorporating a Markov Logic Network based on variational inference. We empirically evaluate our approach on several benchmark datasets, and the experimental results validate that our method substantially enhances the accuracy and quality of the reasoning results. | [
"Knowledge graph reasoning",
"embedding",
"rule-based",
"variational inference"
] | https://openreview.net/pdf?id=EGxgZzDODh | https://openreview.net/forum?id=EGxgZzDODh | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zDUViFm5jw",
"uFamQxO5pr",
"EjasWb6FpZ",
"4vQZrVwqwE",
"0LzisowQh7"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1729953749756,
1730458093714,
1731828530318,
1730291291369,
1730447623480
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5382/Reviewer_1oQB"
],
[
"ICLR.cc/2025/Conference/Submission5382/Reviewer_4F7f"
],
[
"ICLR.cc/2025/Conference/Submission5382/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5382/Reviewer_VfE1"
],
[
"ICLR.cc/2025/Conference/Submission5382/Reviewer_WXKG"
]
],
"structured_content_str": [
"{\"summary\": \"The paper studies the popular problem of knowledge graph completion. The paper contrasts rule based methods, on the one hand, with embedding based methods, on the other hand. Rules are said to be more accurate but less efficient, while embeddings are said to be less accurate and more efficient. The paper therefore proposes a strategy for combining the advantages of both, by using a variational approximation of Markov logic networks.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Studying new methods in which embedding and rule based methods can be combined is clearly of interest.\\n\\nThe experimental results are very good, spectacular even.\", \"weaknesses\": \"The core idea of this paper is identical to that of the pLogicNet paper, which also proposes a variational approximation of Markov logic networks based on embeddings. Furthermore, ExpressGNN builds on pLogicNet by using GNNs instead of embeddings, and this paper similarly analyses a variant based on GNNs. It is not clear what is novel about the proposed model compared to these two earlier models. Worryingly, while pLogicNet and ExpressGNN are cited in the paper, no mention at all is made of the close correspondence. All that is said about pLogicNet, for instance, is that it is a \\\"probabilistic logic reasoning network ... demonstrating exemplary performance\\\". If there is a conceptual difference with pLogicNet which I missed, the paper should have discussed this explicitly.\\n\\nThe experimental results are substantially better than those of existing models, including those of pLogicNet. Given the close similarity with pLogicNet, this makes the validity of the results questionable. At a minimum, the paper should have analysed the differences with pLogicNet. For instance, to what extent can the performance differences be explained by the fact that both methods start from a different set of rules? 
If this does not explain the difference, then what is responsible for this huge performance gap?\\n\\nThe GNN variant of the model is introduced while discussing the experimental results, but is never properly explained. Table 3 shows that it has only a quarter of the parameters of ExpressGNN, but without further details, this seems to be a matter of different hyper parameter tuning, rather than any genuine difference.\\n\\nThe paper is poorly written. For instance, even the motivation in the introduction doesn't really make sense. Knowledge graphs are said to capture \\\"rich semantics\\\" and offer \\\"more expressive\\\" representations than traditional methods, which doesn't make sense to me. The technical details, for instance in Section 4.1, are very hard to follow.\", \"questions\": \"Why did you not discuss the close similarity with pLogicNet and ExpressGNN in the paper?\\n\\nWhy did you not analyse where the performance improvements compared to pLogicNet and ExpressGNN are coming from, given the very close similarity with these models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper explores the combination of embedding-based methods with rule-based reasoning approaches to address their individual shortcomings when used separately. The authors introduce Neural Probabilistic Logic Learning (NPLL), a framework that combines the strengths of embeddings and rule-based reasoning, which purportedly achieves efficient, large-scale knowledge graph reasoning. NPLL seemingly demonstrates high reasoning performance, even in data-scarce conditions, balancing model size with reasoning capability to enable practical applications.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper is generally well-written, though the clarity of some parts could be improved.\\n\\nThe experiments purportedly show huge gains by the paper's approach on standard benchmarks.\", \"weaknesses\": \"This paper does not clearly position its technical contributions with respect to prior research. Its approach resembles that of pLogicNet (Qu and Tang, 2019) and ExpressGNN (Zhang et al., 2020), both of which the paper cites. Both pLogicNet and ExpressGNN employ the same variational-EM approach as this paper, as well as the computational simplifications of mean-field approximation and pseudo-log-likelihood optimization. In light of these existing works, the primary contribution of this paper appears to be the introduction of the triplet scoring function in Equation 7, which is essentially a simple feed-forward network. Relative to prior work, this contribution seems incremental at best. 
Furthermore, in the context of existing embedding-based approaches, the triplet scoring function (Equation 7) may be redundant, as current embedding-based methods can produce scores using the same inputs (embeddings for entities and relations).\\n\\nAlthough the empirical results show significant improvements over baseline scores, the paper\\u2019s exposition does not clearly explain how these improvements are achieved through the authors\\u2019 approach.\", \"questions\": \"Line 101: The paper claims that it \\\"is significantly more effective for knowledge graph reasoning\\\" than the prior works mentioned in lines 84-101 without clearly describing how it is technically superior. Could the authors please elaborate on their technical contributions relative to each of the related works?\\n\\nLine 187, Section 4:\\nCould the authors please compare and contrast their NPLL model with the closely related models pLogicNet and ExpressGNN? Which specific components introduced in the paper account for the stellar empirical results reported?\\n\\nLine 324, Section 5:\\nWhich empirical results in this section support the authors' answer to the above question regarding the most effective components of their approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper proposes a rule-based KG reasoning method. The main idea is based on markov logic network. Compared with baseline method, the proposed method is more effective and performs the best over several benchmarks. In particular, the proposed NPLL is effective in data-scarse scenarios.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method outperforms the baseline methods by a large margin.\\n2. The proposed method also performs well in the data-scarce cases.\\n3. The proposed method is parameter efficient.\\n4. Code is provided.\", \"weaknesses\": \"1. This paper is ill-written. The motivation is extremely unclear. Over the entire paper, it is hard for me to capture what problem this paper want to address and how the proposed method is motivated.\\n2. The advantage of NPLL over other methods is not well discussed. The authors frequently claim that NPLL is more effective than the others. However, in what aspects and why? I can't understand.\\n3. As for the methodology, there lack of an overall picture of the whole framework. Just talk about what they do with many details for about 3.5 pages. The visualization of the framework in Figure 1 is also not well illustrated.\\n4. For the contributions, I don't know why the second and third properties are important. In other words, how these properties or designs benefit the KG reasoning problem is not clear.\\n5. Section 2 mentions interpretability, but experimental results do not show this point.\\n6. There are many typos and inappropriate expressions. For example,\\n- Section 3 uses mixed expressions of italic and normal fonts, e.g., E and $E$, L and $L$, fi and $f_i$.\\n- In line 142, what is $y_i'$?\\n- exp in Equations 1 and 2 with different fonts.\\n7. 
The font sizes in figure 2 and 3 are too small.\", \"questions\": \"Please check the questions in weakness points.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"1. The paper introduces Neural Probabilistic Logic Learning (NPLL), a rule-based method for knowledge graph reasoning. NPLL represents knowledge using a Markov Logic Network (MLN), enhancing the expressiveness of embedding networks. Through variational inference, NPLL accurately infers unknown facts and introduces a scoring module to improve reasoning accuracy in knowledge graphs.\\n\\n2. The authors conduct extensive experiments across various benchmark datasets, including YAGO3-10, YAGO37, Codex-L, WN18RR and FB15k-237. The results demonstrate that NPLL achieves superior reasoning performance, surpassing other methods on large-scale and domain-specific datasets. This validation highlights NPLL\\u2019s effectiveness in complex knowledge graph reasoning tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The methodological improvement introduced in this paper that leverages an embedding-based scoring module is straightforward. However, this simplicity contributes to the model\\u2019s robustness and ease of implementation, and the results achieved are notably impressive.\\n\\n2. The experimental evaluation is thorough, covering a wide range of benchmark knowledge graphs. This comprehensive testing approach not only underscores the model\\u2019s versatility but also consistently demonstrates superior performance across diverse datasets, reinforcing the robustness and broad applicability of the proposed method.\", \"weaknesses\": \"## 1 Novelty Issue\\n1. The proposed methodology closely resembles the approach used in ExpressGNN [1], with the primary difference being the addition of a scoring module on factual triples $(e_h, l, e_t)$. It remains unclear what further distinctions, if any, exist between this model and ExpressGNN, raising concerns regarding the novelty of this contribution.\\n\\n## 2 Insufficient and Unjustified Experimentation\\n1. 
Two model variants, NPLL-basic and NPLL-GNN, are proposed, with reasoning results provided for each (Table 2). However, no explanation is given for the significantly lower accuracy of NPLL-GNN compared to NPLL-basic, leaving questions about model performance unaddressed.\\n\\n2. The evaluation of data efficiency (Table 4) is limited to the FB15k-237 dataset, replicating results that have already been demonstrated in ExpressGNN. Since this work (NPLL) and ExpressGNN [1] follow the same framework (MLN), results on additional datasets would help clarify the model\\u2019s data efficiency. Additionally, the impact of varying data sizes on NPLL-basic performance is minimal (Table 4), but this observation is neither analyzed nor discussed.\\n\\n3. In the Related Work section, the authors claim that compared to embedding-based methods, NPLL enhances both interpretability and reasoning quality. However, no experimental evidence is provided to substantiate this claim.\\n\\n## 3 Representation\\n1. Improper use of mathematical symbols: The mathematical formulations are often imprecise, with symbols inconsistently defined or unclear. For example, the definitions of \\u201cfact\\u201d in the preliminary section are ambiguous, and the MLN setup and representation of unknown facts are incomplete. Including concrete examples would improve clarity and reader comprehension. In the Model section, certain notations (e.g., $u_g$, $u_k$ in Equation 8) are confusing and inadequately defined, making this section challenging to follow.\\n\\n2. The text and curves in the all figures, especially Figure 2 and 3, are difficult to read due to their small size, limiting accessibility to critical information.\\n\\n3. Significant details are missing in the methods section, such as specifics on model training and the E-step and M-step processes. 
This lack of detail, particularly compared to ExpressGNN, further underscores the concerns about novelty in this work.\\n\\n[1] Zhang, Y.; Chen, X.; Yang, Y.; Ramamurthy, A.; Li, B.; Qi, Y.; Song, L. Efficient Probabilistic Logic Reasoning with Graph Neural Networks. arXiv February 4, 2020. https://doi.org/10.48550/arXiv.2001.11850.\", \"questions\": \"1. Novelty: Beyond the addition of the scoring module, what are the other key differences between this approach and ExpressGNN? It would be helpful if the authors could elaborate on any unique elements or improvements this model brings, especially regarding interpretability, efficiency, or theoretical grounding.\\n\\n2. Impact of different data sizes on model performance: The experiments with varying data sizes in the FB15k-237 dataset reveal minimal impact on model performance, but the reasoning behind this result is not addressed. Could the authors provide an analysis of why this might be the case? It would also be valuable if additional insights could be given on the robustness of the model in data-scarce environments or if alternative metrics could show nuanced performance variations.\\n\\n3. Training time comparisons: Since NPLL utilizes rules derived from a pre-trained Neural-LP model on specific datasets, I suggest that the reported training time should also include the time required to train the Neural-LP model initially (Table 6).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EGjTCIcSnW | On the Robustness of Vision-Language Models Against Distractions | [
"Ming Liu",
"Hao Chen",
"Jindong Wang",
"Wensheng Zhang"
] | Although vision-language models (VLMs) have achieved significant success in various applications such as visual question answering, their resilience to prompt distractions remains an under-explored area. Understanding how distractions affect VLMs is crucial for improving their real-world applicability, as inputs could be filled with noisy and irrelevant information in many practical scenarios. This paper aims to assess the robustness of VLMs against both visual and textual distractions in the context of science question answering. Built on the \emph{ScienceQA} dataset, we developed a new benchmark that introduces distractions in both the visual and textual contexts. To evaluate the reasoning capacity of VLMs amidst these distractions, we analyzed the performance of ten state-of-the-art models, including GPT-4o. Our findings reveal that most VLMs are vulnerable to various types of distractions, experiencing noticeable degradation in reasoning capabilities when confronted with distractions. Notably, models such as InternVL2 demonstrate a higher degree of robustness to these distractions. We also found that models exhibit greater sensitivity to textual distractions than visual ones. Additionally, we explored various mitigation strategies, such as prompt engineering, to counteract the impact of distractions. While these strategies improved model resilience, our analysis shows that there remain significant opportunities for further improvement. | [
"Model Evaluation",
"Vision-Language Models",
"Multimodal",
"Distraction Robustness"
] | https://openreview.net/pdf?id=EGjTCIcSnW | https://openreview.net/forum?id=EGjTCIcSnW | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"C1n4QLOb74"
],
"note_type": [
"comment"
],
"note_created": [
1729568539794
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"desk_reject_comments\": \"Margin violation -- this paper has reduced margins on both the left and right sides to fit more content into 10 pages.\", \"title\": \"Submission Desk Rejected by Program Chairs\"}"
]
} |
|
EG9nDN3eGB | A Graph Enhanced Symbolic Discovery Framework For Efficient Logic Optimization | [
"Yinqi Bai",
"Jie Wang",
"Lei Chen",
"Zhihai Wang",
"Yufei Kuang",
"Mingxuan Yuan",
"Jianye HAO",
"Feng Wu"
] | The efficiency of Logic Optimization (LO) has become one of the key bottlenecks in chip design. To prompt efficient LO, previous studies propose using a key scoring function to predict and prune a large number of ineffective nodes of the LO heuristics. However, the existing scoring functions struggle to balance inference efficiency, interpretability, and generalization performance, which severely hinders their application to modern LO tools. To address this challenge, we propose a novel data-driven circuit symbolic learning framework, namely CMO, to learn lightweight, interpretable, and generalizable scoring functions. The major challenge of developing CMO is to discover symbolic functions that can well generalize to unseen circuits, i.e., the circuit symbolic generalization problem. Thus, the major technical contribution of CMO is the novel Graph Enhanced Symbolic Discovery framework, which distills dark knowledge from a well-designed Graph Neural Network (GNN) to enhance the generalization capability of the learned symbolic functions. To the best of our knowledge, CMO is *the first* graph-enhanced approach for discovering lightweight and interpretable symbolic functions that can well generalize to unseen circuits in LO. Experiments on three challenging circuit benchmarks show that the *interpretable* symbolic functions learned by CMO outperform previous state-of-the-art (SOTA) GPU-based and human-designed approaches in terms of *inference efficiency* and *generalization capability*. Moreover, we integrate CMO with the Mfs2 heuristic---one of the most time-consuming LO heuristics. The empirical results demonstrate that CMO significantly improves its efficiency while keeping comparable optimization performance when executed on a CPU-based machine, achieving up to 2.5× faster runtime. | [
"Chip Design",
"Logic Optimization",
"Symbolic Regression",
"Knowledge Distillation"
] | Accept (Poster) | https://openreview.net/pdf?id=EG9nDN3eGB | https://openreview.net/forum?id=EG9nDN3eGB | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yhnRBTvG9G",
"wyDtbm6fJM",
"wlEdGbURqe",
"wdJHO4LV6a",
"vGCWctDda2",
"uyEXk3mLHD",
"sEgqLjVGs1",
"o03XV6fRSN",
"nc3VUd7atd",
"mKNUbLbTMo",
"l6U94EiPGL",
"iheK6KpYhQ",
"hX7g1BsTp0",
"gx3Xxg2G8r",
"cvrptrEjTt",
"cj1ZyJ6wfj",
"aGvzYOMTSD",
"YlqJsoLDJK",
"TnQmiP0EzT",
"SAmqtsPeHF",
"QOaOY4uWXT",
"Q404WQaABj",
"MyaNNoJIfJ",
"MPG8taUmOq",
"LlHvKnfwCZ",
"L4aWjlgRmD",
"JBI5g6Yly1",
"HcnWNKU4Pc",
"G6B0FbodF8",
"FYUdC5KlbW",
"FYGodQ8yge",
"ClHYN1MxGr",
"BJ4Y4dMJX7",
"AG98Yyr5W9",
"8u8URbDStY",
"8KUyuULKxD",
"7bl1l4G1rh",
"7CMOvN5w54",
"6GdW3fAHh7",
"5IRBa8bsW6",
"3ceOGEw1Rx",
"3Jmd0UOkAy",
"0BQR7tBTPI"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1732258695388,
1732774005055,
1732256428307,
1733106552599,
1733210723309,
1732256966726,
1733189281467,
1732459428889,
1730147861482,
1733282752028,
1733211237454,
1733106638308,
1733189330684,
1732258712404,
1732256792850,
1732950226172,
1732257016046,
1732459273073,
1734469479060,
1733206191673,
1732256851739,
1732256673293,
1732773886866,
1732257536646,
1730031016665,
1733284478669,
1732257477544,
1732258611204,
1733200906750,
1732459349083,
1733222137515,
1733284532416,
1733284510402,
1733193697372,
1732257146647,
1732950341521,
1733159350182,
1730752134077,
1732256728018,
1733106597222,
1732950287285,
1737523824548,
1732773957288
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Reviewer_oLyS"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Area_Chair_vXri"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Reviewer_kVA7"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Reviewer_kVA7"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Reviewer_oLyS"
],
[
"ICLR.cc/2025/Conference/Submission7225/Reviewer_s1pC"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7225/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"### Weakness 4\\n> **Are the datasets mixed to train a single GNN, or are three separate GNNs trained for each dataset?**\\n\\nIn our approach, we use the leave-one-out generalization evaluation strategy and **train separate GNNs for each dataset.** For instance, given the EPFL benchmark, we construct a Log2 dataset by designating Log2 as the testing dataset and using the remaining circuits (including Hyp and Square) in the EPFL benchmark as the training dataset. A GNN model is then trained on the training dataset and employed to discover symbolic functions for the Log2 dataset.\\n\\n### Question 1\\n> **Can CMO generalize to other logic optimization methods?**\\n\\n**Yes, our CMO framework can generalize to other logic optimization (LO) methods.** The common LO heuristics include **Rewrite [5], Refactor [6], Resub [7], and Mfs2 [8]**, which **all follow the paradigm shown in Figure 5 in the initial submission [1]**. Due to time constraints, we only tested our method on **the most time-consuming heuristic, i.e., Resub, among all the heuristics (see Table b).** However, we believe that our method can generalize to other heuristics as they follow the same paradigm and are only different in the node-level transformations.\\n\\nTo verify whether our method can generalize to the Resub heuristic, our CMO **first follows [1] to collect a training dataset** $\\mathcal{D} = \\lbrace \\textbf{x}\\_i, y_i \\rbrace_{i=1}^N $. The node features are obtained from an AIG that contains structural and semantic information, and the labels are collected based on the effectiveness of the node-level transformations. **Then we train a GNN model** on the training dataset and employ our GESD framework to distill a symbolic function from the teacher GNN model. 
**Finally, we evaluate the generalization performance of the learned symbolic functions.** The results in Table c demonstrate that our CMO **achieves an average prediction recall of 85% on the test circuits.** Therefore, we can conclude that our CMO effectively generalizes to other logic optimization heuristics such as Resub.\\n\\n**Table b:** We analyze the runtime of commonly used LO heuristics on six challenging open-source circuits. The ratio denotes the ratio of the runtime to that of the Rewrite heuristic.\\n|Avg Time Ratio to Rewrite | | | | | |\\n|------------|---------|---------|---------------------------|-------|-------|\\n| Heuristics | Rewrite | Balance | Refactor | Resub | Mfs2 |\\n| Time Ratio | 1 | 0.05 | 1.21 | **73.44** | 30.94 |\\n\\n**Table c:** We evaluate our CMO, COG, and Effisyn on six challenging open-source circuits. The results demonstrate that our CMO can generalize to the pre-mapping heuristic Resub.\\n| | Hyp | Multiplier | Square | Des_perf | Ethernet | Conmax | Average |\\n|------------|-------|------------|--------|----------|----------|-----------|----------|\\n| Method | Recall | Recall | Recall | Recall | Recall | Recall | Recall |\\n| COG | 0.90 | **0.95** | 0.82 | **0.79** | **0.99** | 0.76 | **0.87** |\\n| Effisyn | 0.67 | 0.20 | 0.63 | 0.46 | 0.82 | 0.020 | 0.47 |\\n| CMO (ours) | **0.90** | 0.88 | **0.82** | 0.65 | 0.91 | **0.92** | **0.85** |\\n\\n[5]. Bertacco, Damiani. The disjunctive decomposition of logic functions. 1997 Proceedings of IEEE International Conference on Computer Aided Design (ICCAD). IEEE, 1997: 78-82.\\n\\n[6]. Brayton R K. The decomposition and factorization of Boolean expressions. ISCA-82, 1982: 49-54.\\n\\n[7]. Brayton A M R. Scalable logic synthesis using a simple circuit structure. Proc. IWLS. 2006, 6: 15-22.\\n\\n[8]. Mishchenko A, et al. Scalable don't-care-based logic optimization and resynthesis. ACM Transactions on Reconfigurable Technology and Systems (TRETS), 2011, 4(4): 1-23.\"}",
"{\"title\": \"We eagerly await your feedback\", \"comment\": \"Dear Reviewer kVA7,\\n\\nWe are writing to gently remind you that **the deadline for the author-reviewer discussion period is approaching** (due on December 2nd). We eagerly await your feedback to understand if our responses have adequately addressed all your concerns. *If so, we would deeply appreciate it if you could raise your score*. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. We sincerely thank you once more for your insightful comments and kind support.\\n\\nBest,\\n\\nAuthors\"}",
"{\"comment\": \"# Response to Reviewer s1pC\\nWe thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal can properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. \\n\\n### Weakness 1.\\n> **The link between the two methods in sections 4.1 and 4.2 needs to be further elucidated, and it is not currently possible to visualize in the text the specific interrelationships between the two methods.**\\n\\nThanks for your valuable comments. The relationship between the two approaches described in Sections 4.1 and 4.2 is as follows: **The GESD framework discussed in Section 4.2 is the detailed description of the structural and semantic function learning component outlined in Figure 2 of Section 4.1 .** To provide clearer clarification of the connection between Sections 4.1 and 4.2, we have updated Figure 2 by changing the label 'semantic/structural function learning' to **'GESD for semantic/structural function learning.'** Additionally, we **added a textual description in Section 4.1 of the first revision (line 241) to explain how GESD works in our CMO framework.** For your convenience, we have included the relevant supplementary content from Section 4.1 below.\\n\\n**GESD for Symbolic Function Learning**\\nAfter decomposing the init feature into structural and semantic components, we collect structural data $\\\\mathcal{D}\\\\_{str} = \\\\lbrace\\\\textbf{x}\\\\_i^{str}, y_i\\\\rbrace_{i=1}^N $ and semantic data $\\\\mathcal{D}\\\\_{sem} = \\\\lbrace\\\\textbf{x}\\\\_i^{sem}, y_i\\\\rbrace_{i=1}^N $, where $\\\\textbf{x}\\\\_i^{str} $ refers to structural node feature and $\\\\textbf{x}\\\\_i^{sem} $ refers to semantic node feature. 
To capture structural information, we employ our Graph Enhanced Symbolic Discovery (GESD) framework to learn a mathematical symbolic function $f^{str}: \\\\mathbf{R}^{d} \\\\to \\\\mathbf{R} $ (see Section 4.2), as the values of structural features can be approximated as continuous data, making them suitable for continuous mathematical symbolic regression. In contrast, learning mathematical functions for semantic information is challenging due to the discrete and binary nature of both feature values and labels. Thus, to capture semantic information, we formulate the semantic function as a Boolean symbolic learning problem, i.e., employing our GESD framework to learn a boolean function $f^{sem}: \\\\mathbf{B}^{d} \\\\to \\\\mathbf{B} $ (see Section 4.2) that can accurately identify the effective nodes, where $\\\\mathbf{B} = \\\\lbrace 0,1 \\\\rbrace $ denotes the boolean feature domain.\", \"title\": \"Rebuttal by Authors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback.\", \"comment\": \"Dear Reviewer s1pC,\\n\\nWe would like to express our sincere gratitude once again for your positive feedback, insightful comments, and constructive suggestions. Your guidance has been invaluable in helping us improve the quality of our work!\\n\\nWe are writing to gently remind you that **the author-reviewer discussion period will end in less than 36 hours**. We eagerly await your feedback to **understand if our responses have adequately addressed your concerns**. **If so, we would deeply appreciate it if you could raise your score**. If not, we are eager to address any additional queries you might have, which will enable us to further enhance our work.\\n\\nOnce again, thank you for your kind support and constructive suggestions!\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback\", \"comment\": \"Dear Reviewer s1pC,\\n\\nWe would like to sincerely thank you once again for your positive feedback, insightful comments, and constructive suggestions. Your guidance has been instrumental in improving the quality of our work.\\n\\nAs the author-reviewer discussion period enters its final hours with **less than 5 hours** remaining, we wanted to kindly follow up regarding your feedback. We would greatly value your thoughts on **whether our responses have sufficiently addressed your concerns**. **If so, we would deeply appreciate it if you could consider reflecting this in your score**. If there are any remaining questions or concerns, we would be more than happy to provide further clarifications within the remaining time.\"}",
"{\"comment\": \"### Weakness 4\\n> **Provide more circuit dataset descriptions, e.g., graph sizes and graph visualizations.**\\n\\nWe provide **detailed statistics of circuits from two open-source benchmarks and one industrial benchmark in Tables 11, 12, 13, and 14 of the initial submission**. Moreover, we supplement a new subsection, **Appendix D.4 (Line 910 and Figure 7)--- \\\"Visualization of the Circuit Graph\\\"---in the first revision for graph visualizations**. \\n\\nSpecifically, these circuit statistics include information about PIs (Primary Inputs), POs (Primary Outputs), Latches, Nodes (the number of graph nodes), Edges (the number of graph edges), Cubes, and Lev (Level). **In this paper, we use Nodes, Edges, and Lev to represent the size of the graph.** For your convenience, we provide the meanings of the graph information and statistics in Tables 11, 12, 13, and 14 below.\\n- The fanins of a node are the nodes providing input to it, whereas the fanouts are the nodes it drives. \\n- Primary Inputs (PIs) are nodes with no fanins, and Primary Outputs (POs) are a subset of the network\\u2019s nodes. \\n- Latches are specialized nodes used in sequential circuits. \\n- **Nodes correspond to logic gates in the boolean network, while Edges represent the wires connecting them.** \\n- Cubes represent subsets of input variables in Boolean functions. \\n- **Lev refers to the depth of the directed acyclic graph (DAG)**, measured as the maximum number of edges between the PIs and POs.\\n\\n\\n\\n**Table 11**\\uff1a A detailed description of circuits from the EPFL benchmark. 
Nodes denote the sizes of circuits, and Lev denotes the depths of circuits.\\n| Circuit | PI | PO | Latch | Nodes | Edge | Cube | Lev |\\n|------------------------|-------|-------|-------|--------|--------|--------|-------|\\n| Adder | 256 | 129 | 0 | 1020 | 2040 | 1020 | 255 |\\n| Barrel shifter | 135 | 128 | 0 | 3336 | 6672 | 3336 | 12 |\\n| Divisor | 128 | 128 | 0 | 57247 | 114494 | 57247 | 4372 |\\n| Hypotenuse | 256 | 128 | 0 | 214335 | 428670 | 214335 | 24801 |\\n| Log2 | 32 | 32 | 0 | 32060 | 64120 | 323060 | 444 |\\n| Max | 512 | 130 | 0 | 2865 | 5730 | 2865 | 287 |\\n| Multiplier | 128 | 128 | 0 | 27062 | 54124 | 27062 | 274 |\\n| Sin | 24 | 25 | 0 | 5416 | 10832 | 5416 | 225 |\\n| Square-root | 128 | 64 | 0 | 24618 | 49236 | 24618 | 5058 |\\n| Square | 64 | 128 | 0 | 18486 | 36969 | 18485 | 250 |\\n| Round-robin arbiter | 256 | 129 | 0 | 11839 | 23678 | 11839 | 87 |\\n| Alu control unit | 7 | 26 | 0 | 175 | 348 | 174 | 10 |\\n| Coding-cavlc | 10 | 11 | 0 | 693 | 1386 | 693 | 16 |\\n| Decoder | 8 | 256 | 0 | 304 | 608 | 304 | 3 |\\n| i2c controller | 147 | 142 | 0 | 1357 | 2698 | 1356 | 20 |\\n| Int to float converter| 11 | 7 | 0 | 260 | 520 | 260 | 16 |\\n| Memory controller | 1204 | 1230 | 0 | 47110 | 93945 | 47109 | 114 |\\n| Priority encoder | 128 | 8 | 0 | 978 | 1956 | 978 | 250 |\\n| Lookahead XY router | 60 | 30 | 0 | 284 | 514 | 257 | 54 |\\n| Voter | 1001 | 1 | 0 | 13758 | 27516 | 13758 | 70 |\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback\", \"comment\": \"Dear Reviewer s1pC,\\n\\nWe would like to sincerely thank you once again for your positive feedback, insightful comments, and constructive suggestions. Your guidance has been invaluable in enhancing the quality of our work!\\n\\nAs the author-reviewer discussion period is approaching its conclusion with **less than 12 hours** remaining, we wanted to kindly follow up regarding your feedback. We are eager to hear your thoughts on **whether our responses have sufficiently addressed your concerns**. **If they have, we would greatly appreciate it if you could consider reflecting this in your score.** If there are any remaining questions or concerns, we would be more than happy to provide further clarifications within the remaining time.\\n\\nThank you again for your support and thoughtful guidance throughout this process. We deeply value your time and effort in reviewing our work.\\n\\nBest\\uff0c\\n\\nAuthors\"}",
"{\"title\": \"We would love to hear your feedback\", \"comment\": \"Dear Reviewer kVA7,\\n\\nWe greatly appreciate your careful reading and constructive comments! We sincerely hope that our rebuttal **has properly addressed all your concerns**, including **revisions to clarify the Experiment section** (see *Weakness 3 and Question 4*), **additional generalization results** (see *Question 1 and Question 2*), and **explanation for our label collection strategy** (see *Weakness 2 and Question 3*). Item-by-item responses to your comments are provided above this response for your reference.\\n\\nAs the deadline for the author-reviewer discussion period is approaching (due on November 27), **we are looking forward to your feedback and/or questions**! We would deeply appreciate it if you could raise your score if our rebuttal has addressed your concerns. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission.\\n\\nBest,\\n\\nAuthors\"}",
"{\"summary\": \"This paper studies the problem of circuit synthesis (CS) via graph-based methods that can generalize to unseen circuits. The proposed method, CMO, combines symbolic function learning with Graph Neural Networks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written with good introduction to the domain especially for ML-audience less familiar with hardware design.\\n2. Choice of benchmarks and state-the-art heuristics seem to be solid with comprehensive evaluation.\\n3. The proposed method achieves significant speedup while maintaining optimization performance on real circuits.\", \"weaknesses\": \"1. Theoretical justification and analysis are lacking \\u2013 It seems combining GNN, MCTS, symbolic learning etc. leads to better results on these CS benchmarks, yet some deeper explanation and analysis can be provided to make the paper stronger.\\n2. Some of the writings can be improved, e.g. \\u201cHowever, this approach cannot capture effective information from specific circuit distribution for higher generalization performance\\u201d \\u2013 Is it due to the human-designed nature and lack of adoption of machine learning from existing data?\\n3. Some technical errors, e.g. \\u201cSpecifically, we use mean absolute error and focal loss\\u201d yet the equation (4) is an MSE loss.\\n4. More circuit dataset descriptions, e.g. graph sizes, and graph visualizations would provide a more solid background for ML audience.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Summary of our responses\", \"comment\": \"Dear Area Chair,\\n\\nWe are writing as the authors of the paper \\\"A Graph Enhanced Symbolic Discovery Framework for Efficient Logic Synthesis\\\" (ID: 7225). We would like to express our sincere gratitude for your dedication and support throughout the review process. \\n\\nThe reviewers rated our work as **8** Accept (Reviewer kVA7), **6 (raised to 7)** Weak Accept (Reviewer oLys), and **6** Weak Accept (Reviewer s1pC), respectively.\\n\\nFor your convenience, we **have prepared a summary of our responses** and outlined how we have addressed the reviewers' concerns as follows. We sincerely hope that this summary will facilitate your review and lighten your workload. Thank you once again for your time and support. \\n\\n### Summary of our responses\\nOur paper has received encouraging positive feedback from the reviewers, such as **\\\"the paper is well-structured\\\"** (Reviewer s1pC), **\\\"a profound related work\\\"** (Reviewer s1pC), **\\\"well-written\\\"** (Reviewer oLyS), **\\\"Choice of benchmarks and heuristics are solid\\\"** (Reviewer oLyS), **\\\"achieves significant speedup\\\"** (Reviewer oLyS), **\\\"clearly introduced\\\"** (Reviewer kVA7) and **\\\"comprehensive experiments\\\"** (Reviewer kVA7 and s1pC).\\n\\n**Reviewer kVA7** has replied that \"My main concerns previously were the misused terms, vague details, and the lack of baselines. **Most of them have been addressed** in the rebuttal or revised in the manuscript by the authors. Thus, I decided to **raise my score to 7/8**. I hope the authors can retain these revisions in the manuscript and make them clear.\". 
\\n\\n**Reviewer oLys** has replied that \"I appreciate authors' response and would like to raise my **overall rating to 7**.\" While we regret that ICLR does not include a score of 7 in its rating scale, we humbly believe this indicates **the reviewer's propensity to accept the paper**.\\n\\n**Reviewer s1pC** has **provided many positive comments** on our work, such as \"well-structured\", \"profound related work\" and \"comprehensive experiments\". While we regret not receiving a response to our follow-up, we humbly believe that we have effectively addressed the reviewer\\u2019s primary concerns. Below, we summarize how we have addressed the reviewer's feedback.\\n\\n- **Clarification for the interrelationships between the two Sections in method.** We have clarified that the two methods are presented **in a progressive relationship**. Section 4.2 provides a detailed explanation of the method introduced in Section 4.1. In the revised version, we\\u2019ve added both textual explanations and diagrams to clearly illustrate this progression.\\n\\n- **Explanation for the role and calculation flow of the score $s_i $.** We have explained that the score $s_i $ is calculated to **predict and prune ineffective nodes** in an unseen circuit, thereby **accelerating the CS heuristics**. In the revised version, we\\u2019ve added three detailed algorithms that outline the step-by-step calculation process of $s_i $.\\n\\nOnce again, thank you very much for your time and efforts throughout the review period.\\n\\nBest,\\n\\nAuthors\"}"
"{\"title\": \"We are looking forward to your further feedback\", \"comment\": \"Dear Reviewer oLys,\\n\\nWe sincerely appreciate your thoughtful engagement and your kind consideration of raising the score for our work\\u2014it truly encourages us and reinforces our confidence in the value of our contributions.\\n\\nAs mentioned earlier, ICLR\\u2019s scoring system **does not include a score of 7, and the next available score above 6 is 8**, which corresponds to an **\\\"accept\\\"** decision. With the discussion period nearing its conclusion in **less than 5 hours**, we wanted to kindly check if our responses have addressed your concerns effectively. **If so, we would be most grateful if you might consider reflecting this in your final assessment**. If you have any additional questions or suggestions, please don\\u2019t hesitate to let us know\\u2014we would be delighted to provide further clarifications promptly within the remaining time.\\n\\nThank you again for your valuable feedback and support throughout this process!\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback.\", \"comment\": \"Dear Reviewer kVA7,\\n\\nWe would like to express our sincere gratitude once again for your positive feedback, insightful comments, and constructive suggestions. Your guidance has been invaluable in helping us improve the quality of our work!\\n\\nWe are writing to gently remind you that **the author-reviewer discussion period will end in less than 36 hours**. We eagerly await your feedback to **understand if our responses have adequately addressed your concerns**. **If so, we would deeply appreciate it if you could raise your score**. If not, we are eager to address any additional queries you might have, which will enable us to further enhance our work.\\n\\nOnce again, thank you for your kind support and constructive suggestions!\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback\", \"comment\": \"Dear Reviewer kVA7,\\n\\nWe would like to sincerely thank you once again for your positive feedback, insightful comments, and constructive suggestions. Your guidance has been invaluable in enhancing the quality of our work!\\n\\nAs the author-reviewer discussion period is approaching its conclusion with **less than 12 hours** remaining, we wanted to kindly follow up regarding your feedback. We are eager to hear your thoughts on **whether our responses have sufficiently addressed your concerns**. **If they have, we would greatly appreciate it if you could consider reflecting this in your score.** If there are any remaining questions or concerns, we would be more than happy to provide further clarifications within the remaining time.\\n\\nThank you again for your support and thoughtful guidance throughout this process. We deeply value your time and effort in reviewing our work.\\n\\nBest\\uff0c\\n\\nAuthors\"}",
"{\"comment\": \"### Question 2\\n> **Can a GNN trained on one dataset generalize to another dataset?**\\n\\n**Yes, the GNN trained on one dataset can generalize to another dataset.** Specifically, we trained a GNN model using **all of the circuits from the IWLS benchmark** and tested it on five challenging circuits **from the EPFL benchmark**. The results in **Table d** show that **the GNN trained on one dataset successfully generalizes to another dataset for both Mfs2 and Resub heuristics.**\\n\\n**Table d:** We trained the models on the IWLS benchmark and generalized them to EPFL circuits. The results demonstrate that the GNN achieves high benchmark-generalization performance for both Mfs2 and Resub heuristics.\\n| | | | | | | |\\n|-----------------|----------|-------------|--------------|-------------|-------------|--------------|\\n| **Mfs2 heuristic** | **Circuits** | **Hyp** | **Log2** | **Multiplier** | **Sin** | **Square** |\\n| | Method | Recall | Recall | Recall | Recall | Recall |\\n| | COG | **0.85** | **0.88** | **0.87** | **0.79** | **0.58** |\\n| | Effisyn | 0.34 | 0.09 | 0.61 | 0.95 | 0.55 |\\n| | Random | 0.50 | 0.46 | 0.44 | 0.47 | 0.48 |\\n| **Resub heuristic** | **Circuits** | **Hyp** | **Log2** | **Multiplier** | **Sin** | **Square** |\\n| | Method | Recall | Recall | Recall | Recall | Recall |\\n| | COG | **0.87** | **0.80** | **0.87** | **0.81** | **0.89** |\\n| | Effisyn | 0.65 | 0.47 | 0.20 | 0.61 | 0.57 |\\n| | Random | 0.50 | 0.48 | 0.58 | 0.54 | 0.47 |\\n\\n\\n### Question 4\\n> **The presentation of experiments is confusing. Please clarify which is the main experiment and which are ablation studies.**\\n\\nThanks for your valuable comments. **The main experiment is Experiment 1\\u2014Generalization and Efficiency Evaluation, while the ablation studies are presented in Experiment 3, comparing CMO with CMO without GESD and CMO without SFD and GESD.** We have clarified in Weakness 3.1, 3.2, and 3.3 how we revised our Experiment Section. 
Once again, we appreciate your insightful suggestions, which have greatly improved the clarity of our paper.\"}",
"{\"comment\": \"**Step 2. We empirically show that our GESD framework can enhance different kinds of symbolic learning methods.**\\nTo evaluate **whether GESD can effectively enhance symbolic learning methods**, we combine GESD with three different symbolic learning methods. Specifically, we select a classical genetic programming-based SR method --- **GPLearn [2]**, a state-of-the-art (SOTA) deep learning-based SR method --- **DSR [3]**, and an MCTS-based SR method --- **CMO (ours)** as backend symbolic learning methods. The results in Table a show that **our GESD significantly improves the generalization performance of all symbolic learning methods on six challenging open-source circuits.**\\n\\n**Table a:** G-X means combining GNN and the X symbolic learning method. The results demonstrate that our GESD framework can effectively enhance different kinds of symbolic learning methods.\\n Open-source Circuits | Hyp | Multiplier | Square | Desperf | Ethernet | Conmax | Average \\n:--------------------:|:------:|:----------:|:------:|:-------:|:--------:|:------:|:--------:\\n Method | Recall | Recall | Recall | Recall | Recall | Recall | Recall \\n G-DSR | **0.90** | **0.92** | **0.87** | **0.85** | **0.98** | **0.84** | **0.89** \\n DSR | 0.20 | 0.11 | 0.46 | 0.76 | 0.72 | 0.88 | 0.52 \\n G-GPLearn | **0.64** | **0.92** | **0.91** | **0.80** | **0.02** | **0.72** | **0.67** \\n GPLearn | 0.35 | 0.11 | 0.27 | 0.39 | 0.02 | 1.00 | 0.36 \\n CMO | **0.99** | **0.97** | **0.98** | **0.80** | **0.72** | **0.85** | **0.89** \\n CMO without GESD | 0.93 | 0.52 | 0.72 | 0.60 | 0.42 | 0.45 | 0.61 \\n\\n\\n**Step 3. 
We analyze how the symbolic learning method benefits from the GESD framework.**\\nOur GESD **enhances the generalization capability** of the symbolic function **by transferring the domain-invariant information (i.e., inductive bias) learned by the graph model into the symbolic function.** Specifically, we have shown in Step 1 that the teacher GNN achieves high generalization capability through capturing domain-invariant information from well-constructed circuit subgraphs. To verify whether our GESD effectively incorporates this information into the symbolic searching process, we compute the **KL divergence** between the soft labels generated by the teacher GNN and the outputs of the learned symbolic functions. The results in **Table b** demonstrate that **our GESD enables the student symbolic learning method to effectively learn and compensate for the missing inductive bias.** Therefore, the GESD helps address the circuit symbolic generalization problem.\\n\\n**Table b:** We compute the KL divergence between the teacher GNN model and two variants: our CMO and CMO without GESD. The results reveal that the KL divergence between the GNN and CMO is significantly smaller than that between the GNN and CMO without GESD, demonstrating GESD's effectiveness in distilling inductive bias.\\n| | Hyp | Multiplier | Square |\\n|----------------------|---------------|---------------|----------------|\\n| Method | KL divergence | KL divergence | KL divergence |\\n| CMO | **0.145** | **0.473** | **0.090** |\\n| CMO without GESD | 1.541 | 1.190 | 0.899 |\\n| |**Desperf** |**Ethernet** | **Conmax** |\\n| Method | KL divergence | KL divergence | KL divergence |\\n| CMO |**0.08** | **0.11** | **0.151** |\\n| CMO without GESD | 0.515 | 0.488 | 0.350 |\"}",
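The KL-divergence check described in Table b can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the per-node soft labels below are hypothetical two-class probability vectors, and the function names are our own.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions given as equal-length lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def teacher_student_divergence(teacher_soft_labels, student_outputs):
    """Average per-node KL divergence between the teacher GNN's soft labels and
    the student symbolic function's outputs (smaller = better distillation)."""
    pairs = zip(teacher_soft_labels, student_outputs)
    return sum(kl_divergence(t, s) for t, s in pairs) / len(teacher_soft_labels)

# Hypothetical [p_ineffective, p_effective] soft labels for three circuit nodes
teacher = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
student = [[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]]
print(teacher_student_divergence(teacher, student))
```

A lower value, as in the CMO rows of Table b, indicates the symbolic function tracks the teacher's predictions more closely.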
"{\"title\": \"We would greatly appreciate hearing your feedback.\", \"comment\": \"Dear Reviewer s1pC,\\n\\nWe are writing to kindly remind you that the deadline for the author-reviewer discussion period is fast approaching (**ending in two days** on December 2nd). We greatly value your feedback and are eager to hear your thoughts on whether our responses have sufficiently addressed your concerns. Thank you once again for your thoughtful comments and valuable support throughout this process.\\n\\nBest\\n\\nAuthors\"}",
"{\"comment\": \"**Table 12**: A detailed description of circuits from the IWLS benchmark.\\n| Circuit | PI | PO | Latch | Nodes | Edge | Cube | Lev |\\n|-----------------------------|-------|-------|--------|---------|---------|---------|-------|\\n| aes_core | 259 | 129 | 530 | 20797 | 40645 | 24444 | 28 |\\n| des_area | 240 | 64 | 128 | 5005 | 9882 | 5889 | 35 |\\n| des_perf | 234 | 64 | 8808 | 98463 | 180542 | 108666 | 28 |\\n| ethernet | 98 | 115 | 10544 | 46804 | 113378 | 72850 | 37 |\\n| i2c | 19 | 14 | 128 | 1147 | 2299 | 1375 | 15 |\\n| mem_ctrl | 115 | 152 | 1083 | 11508 | 26436 | 14603 | 31 |\\n| pci_bridge32 | 162 | 207 | 3359 | 16897 | 34607 | 23130 | 29 |\\n| pci_conf_cyc_addr_dec | 32 | 32 | 0 | 109 | 212 | 128 | 6 |\\n| pci_spoci_ctrl | 25 | 13 | 60 | 1271 | 2637 | 1557 | 19 |\\n| sasc | 16 | 12 | 117 | 552 | 1148 | 766 | 10 |\\n| simple_spi | 16 | 12 | 132 | 823 | 1694 | 1089 | 14 |\\n| spi | 47 | 45 | 229 | 3230 | 6904 | 4054 | 32 |\\n| steppermotordrive | 4 | 4 | 25 | 228 | 397 | 253 | 11 |\\n| systemcaes | 260 | 129 | 670 | 7961 | 18236 | 11648 | 44 |\\n| systemcdes | 132 | 65 | 190 | 3324 | 6304 | 3791 | 33 |\\n| tv80 | 14 | 32 | 359 | 7166 | 16280 | 9352 | 50 |\\n| usb_funct | 128 | 121 | 1746 | 12871 | 27102 | 16378 | 25 |\\n| usb_phy | 15 | 18 | 98 | 559 | 1001 | 638 | 12 |\\n| vga_lcd | 89 | 109 | 17079 | 124050 | 242332 | 146201 | 25 |\\n| wb_conmax | 1130 | 1416 | 770 | 29036 | 77185 | 39619 | 26 |\\n| wb_dma | 217 | 215 | 263 | 3495 | 7052 | 4496 | 26 |\\n\\n**Table 13**: A detailed description of two very large-scale circuits from the EPFL benchmark\\n| Circuit | PI | PO | Latch | Nodes | Lev |\\n|---------|------|------|-------|-----------|-----|\\n| twenty | 137 | 60 | 0 | 20732893 | 162 |\\n| sixteen | 117 | 50 | 0 | 16216836 | 140 |\\n\\n**Table 14**: A statistical description of 27 industrial circuits (23 training circuits and 4 testing circuits) from Huawei HiSilicon. 
\\n| Circuit Type | Metric | PI | PO | Latch | Nodes | Lev |\\n|--------------------|--------|----------|----------|-------|----------|-----------|\\n| **Training Circuits** | mean | 8410.5 | 5978.68 | 0 | 104229.4 | 55.95 |\\n| | max | 59974 | 29721 | 0 | 788288 | 104 |\\n| | min | 41 | 107 | 0 | 2775 | 18 |\\n| **Testing Circuits** | mean | 18540.67 | 18015 | 0 | 356111.2 | 103.33 |\\n| | max | 42257 | 33849 | 0 | 655243 | 185 |\\n| | min | 523 | 483 | 0 | 24778 | 40 |\\n\\nTo provide a more comprehensive understanding of the circuit graph, we have included a new subsection, **Appendix D.4 (Line 912 and Figure 7)---\\\"Visualization of the Circuit Graph\\\"---in the first revision**. In this subsection, **we visualize the Boolean network of a small circuit across different phases of circuit optimization and demonstrate how the CS heuristics drive the optimization process.** For your convenience, we provide the relevant context below:\\n\\n**Visualization of The Circuit Graph** In the CS stage, a circuit is usually modeled by a DAG. Common types of DAGs for CS include And-Inverter Graphs (AIGs) for pre-mapping optimization and K-Input Look-Up Tables (K-LUTs) for post-mapping optimization. In the pre-mapping optimization phase, an AIG is a DAG containing four types of nodes: the constant, PIs, POs, and two-input And (And2) nodes. A graph edge is either complemented or not. A complemented edge indicates that the signal is complemented. In the post-mapping optimization phase, a K-LUT is a DAG with nodes corresponding to Look-Up Tables and directed edges corresponding to wires. A Look-Up Table in a K-LUT is a digital memory that implements the Boolean function of the node. **To further illustrate the circuit graph, we visualize the AIG, K-LUT look-up table, and the circuit optimization process of a small circuit selected from IWLS2020 [5] in Figure 7.**\\n\\n[5]. Shubham Rai, et al. Logic synthesis meets machine learning: Trading exactness for generalization. 
In 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1026\\u20131031. IEEE, 2021.\"}",
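For readers unfamiliar with the graph described above, a minimal AIG sketch follows. The node kinds (constant, PI, PO, two-input And) and complemented edges come from the text; the class names and the evaluator are illustrative assumptions, not the paper's data structures.

```python
from dataclasses import dataclass, field

@dataclass
class AigNode:
    """One node of an And-Inverter Graph: 'const', 'pi', 'po', or 'and2'."""
    kind: str
    fanins: list = field(default_factory=list)  # (driver_node, complemented) pairs

def evaluate(node, pi_values):
    """Recursively compute a node's Boolean value given an assignment to the PIs."""
    if node.kind == 'const':
        return False  # the constant-0 node
    if node.kind == 'pi':
        return pi_values[id(node)]
    # A complemented edge inverts the driving signal
    vals = [evaluate(driver, pi_values) ^ comp for driver, comp in node.fanins]
    return vals[0] if node.kind == 'po' else (vals[0] and vals[1])

# Example: a NAND gate, i.e. an And2 node feeding a PO through a complemented edge
a, b = AigNode('pi'), AigNode('pi')
g = AigNode('and2', [(a, False), (b, False)])
out = AigNode('po', [(g, True)])
print(evaluate(out, {id(a): True, id(b): True}))  # -> False
```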
"{\"title\": \"We would love to hear your feedback\", \"comment\": \"Dear Reviewer s1pC,\\n\\nWe greatly appreciate your careful reading and constructive comments! We sincerely hope that our rebuttal **has properly addressed all your concerns**, including **visualizing in the text the specific interrelationships between the two methods** (see *Weakness 1* in rebuttal) and the **illustration of the calculation flow of $s_i$** (see *Weakness 2* in rebuttal). Item-by-item responses to your comments are provided above this response for your reference.\\n\\nAs the deadline for the author-reviewer discussion period is approaching (due on November 27), **we are looking forward to your feedback and/or questions**! We would deeply appreciate it if you could raise your score if our rebuttal has addressed your concerns. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission.\\n\\nBest,\\n\\nAuthors\"}",
"{\"metareview\": \"This paper studies the problem of Logic Optimization, where the goal is to optimize circuits. The paper proposes a new data-driven symbolic learning framework (CMO). The proposed method trains a GNN and uses an MCTS-based symbolic regression method to generate symbolic scoring functions, ensuring both inference efficiency and generalization. Overall the reviewers found the paper to be well-written and the experiments are convincing. There were some concerns (such as misuse of terms) but most of them were addressed in the response period.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion periods the reviewers acknowledged that the new version has fixed some of their concerns.\"}",
"{\"title\": \"Appreciation for your feedback\", \"comment\": \"Dear Reviewer kVA7,\\n\\nWe are truly grateful for your kind support and the time you've taken to review and assess our work. Your willingness to consider raising the score means a great deal to us and gives us further confidence in the value of our contributions. According to your suggestions, we will retain the revisions in the manuscript to make them clear.\\n\\nBest,\\n\\nAuthors\"}",
"{\"comment\": \"**Step 4. We explain why we chose MCTS rather than other symbolic learning approaches, such as genetic algorithms and deep learning, as the backend function searching method.**\\nIn our CS task, we adopt an MCTS-based symbolic learning method **due to its superior training efficiency and robust search capabilities compared to other kinds of symbolic learning approaches.** Specifically, we compare our CMO with G-GPLearn and G-DSR in terms of the offline prediction recall and training time. The results in Table c demonstrate that **our CMO outperforms G-GPLearn in finding generalizable symbolic function**. Moreover, **although G-DSR achieves a comparable prediction recall to CMO, its training process is notably time-consuming.** Therefore, we select MCTS as the backend function searching method.\\n\\n**Table c:** The results demonstrate that our CMO achieves higher prediction recall than the genetic-based method (G-GPLearn) and shorter training time than a SOTA deep learning-based method (G-DSR) on average.\\n| | Hyp | | Multiplier | | Square | | | |\\n|-----------|----------|---------------|------------|---------------|--------|---------------|--------|----------------|\\n| | Recall | Training Time | Recall | Training Time | Recall | Training Time | | |\\n| G-DSR | 0.90 | 5926.10 | 0.92 | 10061.73 | 0.87 | 10085.49 | | |\\n| G-GPLearn | 0.64 | 915.43 | 0.92 | 1542.91 | 0.91 | 1526.16 | | |\\n| CMO | **0.99** | 2911.38 | **0.97** | 5427.95 | **0.98** | 5525.24 | | |\\n| | **Des_perf** | | **Ethernet** | | **Conmax** | | | **Average** |\\n| | Recall | Training Time | Recall | Training Time | Recall | Training Time | Recall | Training Time |\\n| G-DSR | **0.85** | 9593.55 | **0.98** | 11305.60 | 0.84 | 16926.33 | 0.89 | 10649.80 |\\n| G-GPLearn | 0.80 | 757.10 | 0.02 | 1452.16 | 0.72 | 1377.46 | 0.67 | 1261.87 |\\n| CMO | 0.80 | 1885.89 | 0.72 | 2249.19 | **0.85** | 4293.19 | **0.89** | **3715.47** |\\n\\n[1]. Zhihai Wang, et al. 
A circuit domain generalization framework for efficient logic synthesis in chip design. International Conference on Machine Learning, 2024.\\n\\n[2]. Pedro G Espejo, et al. A survey on the application of genetic programming to classification. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(2):121\\u2013144, 2009.\\n\\n[3]. Brenden K Petersen, et al. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In International Conference on Learning Representations, 2020.\\n\\n### Weakness 2.\\n> **Some of the writings can be improved, e.g. \\u201cHowever, this approach cannot capture effective information from specific circuit distribution for higher generalization performance\\u201d \\u2013 Is it due to the human-designed nature and lack of adoption of machine learning from existing data?**\\n\\nThanks for your valuable comments. Yes, the generalization capability of human-designed approaches is **constrained by their inherent nature and the absence of machine learning techniques to leverage existing data.** We have clarified and improved this statement in **Line 61 of the first revision**. For your convenience, we provide the updated text below:\\n\\nIn contrast, [4] proposes a human-designed hard-coded mathematical expression as the scoring function, which aligns with human intuition and is thus regarded to be reliable. **However, designing and developing these functions is extremely challenging as it requires extensive expert knowledge. Moreover, this function cannot achieve high generalization performance due to the lack of adoption of machine learning from existing data**, which could significantly degrade the QoR of the optimized circuits.\\n\\n[4]. Xing Li, et al. Effisyn: Efficient logic synthesis with dynamic scoring and pruning. In IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE, 2023.\\n\\n### Weakness 3\\n> **Some technical errors, e.g. 
\\u201cSpecifically, we use mean absolute error and focal loss\\u201d yet the equation (4) is an MSE loss.**\\n\\nThanks for your valuable comments. We have corrected the term \\\"mean absolute error\\\" to \\\"mean squared error\\\" in **Line 1090 of the first revision** to align with Equation (4). We appreciate your attention to detail, which helps improve the accuracy and clarity of our work.\"}",
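For concreteness, the two losses named above can be sketched generically as follows. This is not the paper's Equation (4) itself: the exact weighting is not reproduced here, and the focal-loss `gamma` is a common default rather than the paper's setting.

```python
import math

def mse_loss(preds, targets):
    """Mean squared error (the corrected term for the regression loss)."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def focal_loss(probs, labels, gamma=2.0, eps=1e-12):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights easy examples."""
    total = 0.0
    for p, y in zip(probs, labels):
        p_t = p if y == 1 else 1.0 - p
        total += -((1.0 - p_t) ** gamma) * math.log(p_t + eps)
    return total / len(probs)

print(mse_loss([0.2, 0.8], [0.0, 1.0]))  # ≈ 0.04
```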
"{\"comment\": \"### Weakness 2.\\n> **What is the role of si in sections 4.1 and 4.2 and what is the flow of the calculations for si?**\\n \\n$\\\\textbf{s} = \\\\lbrace s_i \\\\rbrace \\\\_{i=1}^N $ are **calculated to predict ineffective nodes on a circuit with $N $ nodes and avoid transformation on these nodes to accelerate the CS heuristics in the online phase**. The lower the score, the higher the probability that the node is ineffective. \\n\\nThe calculations of $\\\\textbf{s} $ consist of two steps: **(a). collect training dataset $\\\\mathcal{D} = \\\\lbrace \\\\textbf{x}\\\\_i, y_i \\\\rbrace_{i=1}^N $ and train a pair of structural function $f_{str} $ and semantic function $f_{sem} $ in the offline phase; (b). leveraging the symbolic functions to calculate the score $s_i = f_{str}(x_i^{str}) + w * f_{sem}(x_i^{sem}) $ for all nodes $x_i^{test} = [x_i^{str}, x_i^{sem}] $ on an unseen circuit in the online phase.** The detailed offline training and online score calculation algorithm are provided **in Algorithms 1, 2, and 3 of our first revision**. 
For your convenience, we also provide the Python pseudocode for the online score calculation below:\\n\\n```python\\n# The online score calculation algorithm\\ndef calculate_scores(D_test, f_str, f_sem):\\n    \\\"\\\"\\\"\\n    Input: test dataset D_test, structural function f_str, semantic function f_sem\\n    Output: final scores s for all nodes in the test dataset\\n    \\\"\\\"\\\"\\n\\n    # Step 1: Separate the test dataset into structural and semantic data\\n    D_str = extract_structural_data(D_test)  # Extract structural features\\n    D_sem = extract_semantic_data(D_test)  # Extract semantic features\\n\\n    # Step 2: Calculate structural and semantic scores\\n    s_str = [f_str(x_str) for x_str in D_str]\\n    s_sem = [f_sem(x_sem) for x_sem in D_sem]\\n\\n    # Step 3: Calculate the weight as the median of structural scores\\n    w = median(s_str)\\n\\n    # Step 4: Calculate the final score s_i = f_str(x_i^str) + w * f_sem(x_i^sem)\\n    s = [s_str_i + w * s_sem_i for s_str_i, s_sem_i in zip(s_str, s_sem)]\\n\\n    return s\\n```\"}",
"{\"title\": \"We eagerly await your feedback\", \"comment\": \"Dear Reviewer s1pC,\\n\\nWe are writing to gently remind you that **the deadline for the author-reviewer discussion period is approaching** (due on December 2nd). We eagerly await your feedback to understand if our responses have adequately addressed all your concerns. *If so, we would deeply appreciate it if you could raise your score*. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. We sincerely thank you once more for your insightful comments and kind support.\\n\\nBest,\\n\\nAuthors\"}",
"{\"comment\": \"### Weakness 3.2\\n> **Experiments on generalization should be highlighted in the main part of the manuscript.**\\n\\nThanks for your valuable comments. According to your suggestion, we have revised the experiment section **in the first revision (Line 454)** by **removing the generalization results in Experiment 4** and **organizing all the generalization comparison results in Experiment 1** to better emphasize the generalization evaluation in the main part. For your convenience, we summarize the adjusted experiment sections as follows:\\n- **Experiment 1. To demonstrate the superior performance of our CMO in terms of generalization performance and efficiency.**\\n- Experiment 2. To demonstrate that our approach can not only boost the efficiency of the Mfs2 heuristic but also improve the Quality of Results (QoR).\\n- Experiment 3. Perform carefully designed ablation studies to provide further insight into CMO.\\n- **Experiment 4. To show the appealing features of CMO in terms of ~~generalization capability~~ per-inference efficiency and interpretability.**\\n\\n### Weakness 3.3\\n> **Experiment 4 should showcase generalization compared to other baselines like COG, but why other SR methods? Is this an ablation study of the SR method used?**\\n\\nWe sincerely apologize for the confusion caused by our experiment setting. **The five lightweight methods** discussed in Experiment 4 are **baselines for generalization and efficiency evaluation in Experiment 1, rather than being part of the ablation studies**. \\n\\nSpecifically, we have mentioned in 'Evaluation Metrics and Evaluated Methods' that, **in the offline and online phase, we compare our CMO with the other five lightweight baselines (Line 372 in the initial submission)** to provide a more comprehensive comparison. 
However, these comparisons were not appropriately placed in the correct section of the Experiment.\\n\\n\\nTo clarify this misunderstanding, we have **removed the \\\"High Generalization Performance\\\" section from Experiment 4** and incorporated the comparison results with these lightweight methods into Experiment 1. The updated results for the generalization and online heuristics efficiency comparisons are now presented in **Tables 8 and 9 in the Appendix of the first revision.** For your convenience, we have included the tables below:\\n\\n**Table 8**: We compare our CMO with five lightweight baselines. The results demonstrate that our approach outperforms all of the baselines in terms of generalization capability.\\n| | Hyp | Multiplier | Square | Desperf | Ethernet | Conmax |\\n|------------|--------|------------|--------|---------|----------|--------|\\n| Method | Recall | Recall | Recall | Recall | Recall | Recall |\\n| SPL | 0.93 | 0.52 | 0.72 | 0.60 | 0.42 | 0.45 |\\n| DSR | 0.20 | 0.11 | 0.46 | 0.76 | 0.72 | 0.88 |\\n| XGBoost | 0.91 | 0.86 | 0.46 | 0.79 | 0.33 | 0.68 |\\n| RidgeLR | 0.81 | 0.62 | 0.88 | 0.79 | 0.33 | 0.54 |\\n| Random | 0.50 | 0.44 | 0.48 | 0.50 | 0.47 | 0.50 |\\n| CMO (Ours) | **0.99** | **0.97** | **0.98** | **0.80** | **0.72** | **0.84** |\"}",
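To illustrate how the Recall numbers in these tables can be read, here is a sketch of recall under the node-scoring setup described earlier (lower score = predicted ineffective, with a Top-$k$ fraction of nodes flagged). The function and data below are illustrative assumptions, not the paper's evaluation code.

```python
def recall_at_k(scores, ineffective_labels, k_fraction=0.5):
    """Flag the lowest-scoring k fraction of nodes as ineffective, then compute
    recall against the ground-truth ineffective labels (1 = ineffective)."""
    n = len(scores)
    k = int(n * k_fraction)
    # Indices of the k lowest-scoring nodes (predicted ineffective)
    flagged = set(sorted(range(n), key=lambda i: scores[i])[:k])
    truly_ineffective = {i for i, y in enumerate(ineffective_labels) if y == 1}
    if not truly_ineffective:
        return 0.0
    return len(flagged & truly_ineffective) / len(truly_ineffective)

# Hypothetical scores for six nodes; nodes 0, 2, and 4 are truly ineffective
scores = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]
labels = [1, 0, 1, 0, 1, 0]
print(recall_at_k(scores, labels))  # -> 1.0
```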
"{\"summary\": \"This paper proposes a method called CMO to develop lightweight and generalizable scoring functions for ranking nodes in an AIG, aiming to enhance the efficiency and performance of logic optimization. The method is clearly introduced. The paper trains a GNN and uses an MCTS-based symbolic regression method to generate symbolic scoring functions, ensuring both inference efficiency and generalization. However, some experimental details remain unclear.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method is clearly introduced, with comprehensive experiments demonstrating the effectiveness and performance of CMO.\", \"weaknesses\": \"### Topic\\nThe term \\\"circuit synthesis\\\" is not well-defined in the field of EDA, which may cause confusion. Based on the related works and experiments, this paper appears to focus on logic optimization.\\n### Datasets\\nThe labels of circuit datasets should be clarified. When mentioning node-level transformation, does it mean it is effective for the current step of logic optimization or for overall performance? Effectiveness in the current step may not translate to overall performance in logic optimization.\\n### Experiments\\nThe experiment part is somewhat confusing. The focus of logic optimization should be on time cost and node reduction during the online phase. The offline phase appears more like an ablation study. Experiments on generalization should be highlighted in the main part of the manuscript. Experiment 4 should showcase generalization compared to other baselines like COG, but why other SR methods? Is this an ablation study of the SR method used?\\n### Generalization\\nEPFL, IWLS, and an industrial-level dataset from Huawei HiSilicon are used to train the GNN. Are the datasets mixed to train a single GNN, or are three separate GNNs trained for each dataset?\", \"questions\": \"1. Can CMO generalize to other logic optimization methods like Rewrite?\\n2. 
Can a GNN trained on one dataset generalize to another dataset?\\n3. How are the training dataset labels obtained?\\n4. The presentation of experiments is confusing. Please clarify which is the main experiment and which are ablation studies.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thank you for your kind support and valuable feedback on our paper! We appreciate your insightful comments and constructive suggestions.\"}",
"{\"comment\": \"**Table 6:** We compare CMO with the GPU-based SOTA approach COG and the human-designed approach Effisyn. Specifically, we set the Top $k $ as $50 $% for the COG and CMO and $70 $% for the Effisyn. And Reduction (AR) denotes the reduced number of nodes (optimization performance). Normalized AR denotes the ratio of the AR to that of the default heuristic.\\n| **Hyp** | | | | **Multiplier** | | | | \\n|--------------|--------------------------|---------------------|----------------|--------------|--------------------------|---------------------|----------------| \\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) | \\n| COG | 661.33 | 1.00 | 247.26 | COG | 21.00 | **0.95** | 18.14 | \\n| Effisyn | 662.00 | 1.00 | 270.50 | Effisyn | 18.00 | 0.82 | 16.17 | \\n| CMO (Ours)| 661.00 | **1.00** | **213.39** | CMO (Ours) | 20.00 | 0.91 | **14.32** | \\n| **Square** | | | | **Desperf** | | | | \\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) | \\n| COG | 8.00 | **1.00** | 16.11 | COG | 890.67 | 0.80 | 29.97 | \\n| Effisyn | 1.00 | 0.13 | 16.73 | Effisyn | 895.00 | 0.80 | 26.59 | \\n| CMO (Ours)| 7.33 | 0.92 | **12.54** | CMO (Ours) | 983.00 | **0.95** | **22.38** | \\n| **Ethernet** | | | | **Conmax** | | | | \\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) | \\n| COG | 27.33 | 0.72 | 20.01 | COG | 730.67 | **0.93** | 19.03 | \\n| Effisyn | 27.00 | 0.71 | 32.85 | Effisyn | 704.00 | 0.90 | 25.29 | \\n| CMO (Ours)| 30.67 | **0.82** | **16.25** | CMO (Ours) | 703.33 | 0.90 | **15.27** | \\n| **Ci1** | | | | **Ci2** | | | | \\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) | \\n| COG | 9.67 | 0.81 | 327.05 | COG | 15.50 | 0.67 | 61.89 | \\n| Effisyn | 9.00 | 0.75 | 275.58 | Effisyn | 19.00 | 0.83 | 
59.14 | \\n| CMO (Ours)| 12.00 | **1.00** | **255.90** | CMO (Ours) | 23.00 | **1.00** | **45.61** | \\n| **Ci3** | | | | **Ci4** | | | | \\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) | \\n| COG | 1040.00 | 1.00 | 145.12 | COG | 98.00 | 0.99 | 161.19 | \\n| Effisyn | 1040.00 | 1.00 | 126.31 | Effisyn | 99.00 | **1.00** | 84.29 | \\n| CMO (Ours)| 1040.00 | **1.00** | **113.08** | CMO (Ours) | 96.00 | 0.97 | **109.42** | \\n| **Sixteen** | | | | **Twenty** | | | | \\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) | \\n| COG | 1031.00 | **0.94** | 61905.96 | COG | 1291.00 | **0.95** | 86112.80 | \\n| Effisyn | 9.00 | 0.01 | 47078.07 | Effisyn | 9.00 | 0.01 | 73853.64 | \\n| CMO (Ours)| 1000.00 | 0.91 | **32001.27** | CMO (Ours) | 1251.00 | 0.92 | **56965.94** |\"}",
"{\"comment\": \"**Table 9**: The results demonstrate that our approach outperforms all of the lightweight baselines in terms of online heuristics efficiency and optimization performance.\\n\\n| Hyp | | | | Multiplier | | | |\\n|------------|--------------------|---------------|-------------|------------|--------------------|---------------|--------------|\\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) |\\n| SPL | 659.33 | 0.99 | 234.88 | SPL | 20.67 | 0.91 | 15.63 |\\n| DSR | 527.67 | 0.79 | 257.61 | DSR | 4.00 | 0.18 | 14.41 |\\n| XGBoost | 650.00 | 0.98 | 246.79 | xgboost | 20.00 | 0.91 | 14.28 |\\n| RidgeLR | 646.00 | 0.97 | 228.22 | RidgeLR | 20.00 | 0.91 | 11.52 |\\n| Random | 374.33 | 0.57 | 228.51 | Random | 14.00 | 0.64 | 13.74 |\\n| CMO (Ours) | 661.00 | **1.00** | **213.39** | CMO (Ours) | 20.00 | **0.91** | **14.32** |\\n| **Square** | | | | **Desperf** | | | |\\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) |\\n| SPL | 5.33 | 0.67 | 14.17 | SPL | 927.67 | 0.83 | 31.27 |\\n| DSR | 1.00 | 0.13 | 20.63 | DSR | 865.00 | 0.77 | 26.42 |\\n| XGBoost | 1.00 | 0.13 | 19.73 | xgboost | 1026.00 | 0.92 | 29.97 |\\n| RidgeLR | 3.00 | 0.38 | 14.90 | RidgeLR | 942.00 | 0.84 | 33.26 |\\n| Random | 3.67 | 0.46 | 17.82 | Random | 790.00 | 0.71 | 29.42 |\\n| CMO (Ours) | 7.33 | **0.92** | **12.54** | CMO (Ours) | 983.00 | **0.95** | **22.38** |\\n| **Ethernet** | | | | **Conmax** | | | |\\n| Method | And Reduction (AR) | Normalized AR | Time (s) | Method | And Reduction (AR) | Normalized AR | Time (s) |\\n| SPL | 17.67 | 0.46 | 32.39 | SPL | 681.67 | 0.87 | 25.25 |\\n| DSR | 31.00 | 0.82 | 28.49 | DSR | 767.00 | 0.98 | 22.70 |\\n| XGBoost | 30.00 | 0.79 | 34.57 | xgboost | 751.00 | 0.96 | 23.28 |\\n| RidgeLR | 18.00 | 0.47 | 34.49 | RidgeLR | 638.00 | 0.82 | 24.96 |\\n| Random | 21.00 | 0.55 | 28.96 | Random | 557.67 | 0.71 | 22.31 
|\\n| CMO (Ours) | 30.67 | **0.82** | **16.25** | CMO (Ours) | 703.33 | **0.90** | **15.27** |\"}",
"{\"comment\": \"I appreciate the effort made by the authors!\\n\\nMy main concerns previously were the misused terms, vague details, and the lack of baselines. Most of them have been addressed in the rebuttal or revised in the manuscript by the authors. \\n\\nThus, I decided to raise my score to 7/8. I hope the authors can retain these revisions in the manuscript and make them clear.\"}",
"{\"title\": \"We would love to hear your feedback\", \"comment\": \"Dear Reviewer oLyS,\\n\\nWe greatly appreciate your careful reading and constructive comments! We sincerely hope that our rebuttal **has properly addressed all your concerns**, including the **deeper explanation and theoretical analysis of why combining GNN, MCTS, and symbolic learning leads to better results** (see *Weakness 1* in rebuttal) and **more circuit dataset descriptions** (see *Weakness 4* in rebuttal). Item-by-item responses to your comments are provided above this response for your reference.\\n\\nAs the deadline for the author-reviewer discussion period is approaching (due on November 27), **we are looking forward to your feedback and/or questions**! We would deeply appreciate it if you could raise your score if our rebuttal has addressed your concerns. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback\", \"comment\": \"Dear Reviewer s1pC,\\n\\nWe sincerely thank you once again for your positive feedback, insightful comments, and constructive suggestions. Your guidance has been invaluable in enhancing the quality of our work.\\n\\nAs the author-reviewer discussion period enters its final stage, with **less than two hours** remaining, we wanted to kindly follow up on your feedback. We would greatly appreciate your thoughts on **whether our responses have adequately addressed your concerns**. **If they have, we would be truly grateful if you could consider reflecting this in your score**. Should you have any remaining questions or concerns, we would be more than happy to provide further clarifications within the remaining time.\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thank you for your kind support and valuable feedback on our paper! We appreciate your insightful comments and constructive suggestions.\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thank you for your kind support and valuable feedback on our paper! We appreciate your insightful comments and constructive suggestions.\"}",
"{\"title\": \"Appreciation for Your Feedback\", \"comment\": \"Dear Reviewer oLyS,\\n\\nWe are truly grateful for your kind support and the time you've taken to review and assess our work. Your willingness to consider raising the score means a great deal to us and gives us further confidence in the value of our contributions.\\n\\nWe would like to gently remind you that **ICLR's scoring system does not include a score of 7, and the next available score above 6 is 8**. In other AI conferences such as NeurIPS and ICML, **a score of 7** typically corresponds to an **\\\"accept\\\"**, which **is equivalent to a score of 8 at ICLR**. Therefore, **if our responses have adequately addressed your concerns, we sincerely hope you might consider raising the score to 8 (accept)** within the available range. We summarize our responses below.\\n\\n- **Theoretical and empirical analysis of combining GNN, MCTS, and symbolic learning.** We provide both theoretical and empirical evidence to justify the integration of GNN, MCTS, and symbolic learning in our method. **Theoretically**, we demonstrate that **leveraging multi-domain circuit training datasets** significantly **reduces the generalization error bound** of the teacher GNN model. **Empirically**, we show that the Graph Enhanced Symbolic Discovery (GESD) framework, which combines the teacher GNN with a student symbolic learning model, enhances the generalization ability of the symbolic function by **transferring domain-invariant knowledge (i.e., inductive bias)** from the graph model to the symbolic function. **Additionally**, our experiments highlight the **superior performance of MCTS** over classical symbolic learning methods **in terms of both generalization capability and training efficiency**. These analyses illustrate the rationale for integrating GNN, MCTS, and symbolic learning. Moreover, **this combination enables our method to discover a generalizable, lightweight, and interpretable symbolic function.**\\n\\n- **Some of the writings can be improved.** We sincerely appreciate your valuable feedback on improving the clarity of our writing. **We have carefully revised the unclear statements in the revised version.**\\n\\n- **Technical errors.** We sincerely appreciate your pointing out the technical errors, **which we have carefully addressed and corrected in the revised version**.\\n\\n- **More circuit dataset descriptions and graph visualization.** We have included **detailed statistics for circuits** from two open-source and one industrial benchmark in Tables 11\\u201314 of the rebuttal. Additionally, we added **graph visualizations in the revised version** under Appendix D.4 (Line 910, Figure 7) titled \\\"Visualization of the Circuit Graph.\\\"\\n\\nOnce again, thank you for your constructive suggestions and for considering our work so thoughtfully. Your feedback has been instrumental in helping us improve!\\n\\nBest,\\n\\nAuthors\"}",
"{\"comment\": \"# Response to Reviewer kVA7\\nWe thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal can properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. \\n\\n### Weakness 1\\n> **The term \\\"circuit synthesis\\\" is unclear in EDA and may cause confusion; this paper seems to focus on logic optimization based on related works and experiments.**\\n\\nThanks for your valuable comments. In our initial draft, **we followed previous works [1][2][3][4]** in adopting the term \\\"Circuit Synthesis\\\" as a more accessible alternative to \\\"Logic Synthesis\\\" to enhance readability for non-experts in the field. In general, Logic Synthesis consists of three main stages: translation, logic optimization, and technology mapping. As you insightfully mentioned, our work mainly focuses on logic optimization, and it is more appropriate to use logic optimization rather than circuit synthesis. Therefore, to ensure accuracy and clarity, **we have replaced \\\"Circuit Synthesis\\\" with \\\"Logic Optimization\\\" in the first revision.**\\n\\n[1]. Wang Z et al. Towards Next-Generation Logic Synthesis: A Scalable Neural Circuit Generation Framework. The Thirty-eighth Annual Conference on Neural Information Processing Systems. 2024.\\n\\n[2]. Chowdhury, et al. Openabc-d: A large-scale dataset for machine learning guided integrated circuit synthesis. arXiv preprint arXiv:2110.11292, 2021.\\n\\n[3]. Scarabottolo I, Ansaloni G, Constantinides G A, et al. Approximate logic synthesis: A survey. Proceedings of the IEEE, 2020, 108(12): 2195-2213.\\n\\n[4]. Buch, et al. Logic synthesis for large pass transistor circuits. 1997 Proceedings of IEEE International Conference on Computer Aided Design (ICCAD). 
IEEE, 1997: 663-670.\\n\\n### Weakness 2 & Question 3\\n> **How are the training dataset labels obtained? When mentioning node-level transformation, does it mean it is effective for the current step of logic optimization or for overall performance? Effectiveness in the current step may not translate to overall performance in logic optimization**\\n\\nThe node label $y $ is collected based on the effectiveness of the node-level transformation. **If the node-level transformation is effective at the node $\\\\textbf{x} $, then $y=1 $. Otherwise, $y=0 $.** \\n\\nThe effectiveness of a node-level transformation is determined by whether the heuristic **can optimize the current local subgraph (i.e., reducing the subgraph's nodes)**. While some locally effective nodes may influence overall optimization performance, others may not. However, the proportion of **locally effective nodes that fail to contribute to overall performance is sufficiently small**, so labeling these nodes as positive **has no significant impact on efficiency**. Moreover, applying node-level transformations on these nodes **will not degrade the final optimization performance**. Therefore, it is reasonable and simple to label nodes based on the local effectiveness of the node-level transformations.\\n\\n### Weakness 3.1\\n> **The focus of logic optimization should be on time cost and node reduction during the online phase. The offline phase appears more like an ablation study.** \\n\\nThanks for your valuable comments. Our CMO framework consists of two phases: the offline phase and the online phase. In the offline phase, we formulate the logic optimization (LO) task **as a machine learning (ML) problem**---learning a model from the training dataset that can accurately predict ineffective nodes on unseen circuits. 
**The offline metric**---prediction recall, which serves as a proxy for the online optimization performance---**is used to assess the generalization capability of the learned models from the ML perspective**. In the online phase, the CMO uses the learned model to predict ineffective nodes on unseen circuits and avoid transformations on these nodes to accelerate the X heuristic. **The online generalization metrics**---runtime and node reduction---**are used to evaluate the efficiency and generalization optimization performance of the heuristic X-Mfs2 from the EDA perspective**.\\n\\nIn Experiment 1, the offline results are used to show the superior generalization performance of our CMO. In terms of online results, **we mainly focus on time cost and node reduction** as you suggested. Specifically, **we present the online results in Figure 4 and Table 6 in the initial submission.** Table 6 is provided in Appendix C.3 due to limited space. For your convenience, we have included Table 6 below. The results show that **our CMO significantly outperforms the baselines in terms of efficiency while achieving comparable optimization performance with the GPU-based SOTA method COG.**\", \"title\": \"Rebuttal by Authors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback\", \"comment\": \"Dear Reviewer kVA7,\\n\\nWe are writing to kindly remind you that the deadline for the author-reviewer discussion period is fast approaching (**ending in two days** on December 2nd). We greatly value your feedback and are eager to hear your thoughts on whether our responses have sufficiently addressed your concerns. Thank you once again for your thoughtful comments and valuable support throughout this process.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"Thank you\", \"comment\": \"I appreciate the authors' response and would like to raise my overall rating to 7.\"}",
"{\"summary\": \"This authors propose a novel data-driven circuit symbolic learning framework, CMO. It learns a symbolic scoring function balancing inference efficiency, interpretability, and generalization performance. While existing approaches often struggle with these trade-offs in modern circuit synthesis (CS) tools, CMO demonstrates superior capability in discovering lightweight and interpretable symbolic functions from a decomposed symbolic space. The major technical contribution of CMO is the Graph Enhanced Symbolic Discovery (GESD) framework, which employs a specially designed Graph Neural Network (GNN) to guide the generation of symbolic trees. CMO is the first graph-enhanced approach for discovering lightweight and interpretable symbolic functions that effectively generalize to unseen circuits in CS.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Overall, the proposed work is well-structured with a profound related work.\\n2. This paper proposes a novel circuit symbolic learning framework to learn efficient, interpretable, and generalizable symbolic functions that are reliable and simple to deploy in modern CS tools.\\n3. CMO is the first graph-enhanced approach for discovering lightweight and interpretable symbolic functions that can well generalize to unseen circuits in CS. \\n4. Extensive experimental results show the effectiveness of the proposed CMO over existing works.\", \"weaknesses\": \"The link between the two methods in sections 4.1 and 4.2 needs to be further elucidated, and it is not currently possible to visualize in the text the specific interrelationships between the two methods. For example, what is the role of si in section 4.1 in section 4.2 and what is the flow of the calculations for si.\", \"questions\": \"Please check weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"# Response to Reviewer oLyS\\nWe thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal can properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. \\n\\n### Weakness 1.\\n> **Provide some deeper explanation and theoretical analysis on why combining GNN, MCTS, and symbolic learning leads to better results.**\\n\\nThanks for your valuable comments. To offer a deeper explanation of why the combination of GNN, MCTS, and symbolic learning leads to improved results, we break the analysis down into the following four steps:\\n\\n**Step 1. We provide a theoretical analysis of how the teacher GNN model addresses the circuit generalization problem in our CS task.**\\n \\nThe GNN addresses the circuit generalization problem by **designing multi-domain circuit datasets for training** and **learning domain-invariant information (i.e., inductive bias) from well-constructed subgraphs.** Specifically, [1] first discovers that the large distribution shift between different circuits makes it challenging for models trained on training circuits to generalize to the unseen circuits. To address this problem, [1] provides a theorem **for the generalization error bound** between the average risk $\\\\mathcal{R}(f) $ over all possible target circuit domains and the empirical risk estimation objective $\\\\hat{\\\\mathcal{R}}(f) $ on the training circuit domains. 
A circuit domain refers to the underlying distribution from which circuits are sampled.\\n\\n**Theorem 1**: Under some mild and reasonable assumptions, the following inequality holds with probability at least 1 - $\\\\delta $\\n\\n$$\\\\left(\\\\sup\\\\_{f}\\\\lvert \\\\mathcal{R}(f)-\\\\hat{\\\\mathcal{R}}(f) \\\\rvert \\\\right)^2 \\\\leq \\\\vphantom{\\\\frac{C_4}{M^2}\\\\sum_{k=1}^M\\\\frac{1}{n_k}}\\\\frac{C_1 \\\\log \\\\delta^{-1}+C_2}{M} + \\\\frac{C_3\\\\log 2\\\\delta^{-1}M+C_4\\\\log \\\\delta^{-1}+C_5}{M^2}\\\\sum_{k=1}^M\\\\frac{1}{n_k} $$\\n\\nwhere $C_1, C_2, C_3, C_4, C_5 $ are constants, $M $ is the number of training circuit domains and $n_k $ is the sample size of the $k $-th training circuit domain. In our CS task, the total number of the training samples $n = \\\\sum_{k=1}^M{n_k} $ is a constant. Based on Theorem 1, we derive the following corollary:\\n\\n**Corollary 1**: Under the condition $n_k = \\\\frac{n}{M} \\\\text{ for } k = 1, 2, \\\\cdots, M $ and $1 \\\\leq M \\\\leq \\\\frac{C_1\\\\log\\\\delta^{-1}+C_2} {C_3\\\\log2\\\\delta^{-1}} \\\\cdot n $, **using domain-wise training circuit datasets (i.e., M > 1) will result in a smaller generalization error bound than just pooling them in one mixed dataset (i.e., M=1).** The proof is provided below:\\n\\nWe represent the generalization error bound in Theorem 1 as a function on discrete variable $M $ \\n \\\\begin{align}\\n B(M) = \\\\frac{C_1 \\\\log \\\\delta^{-1}+C_2}{M}+ \\\\frac{C_3\\\\log 2\\\\delta^{-1}M}{n}+\\\\frac{C_4\\\\log \\\\delta^{-1}+C_5}{n}\\\\quad(M \\\\geq 1)\\\\nonumber\\n \\\\end{align} \\nwhere $n $ representing the total number of samples is a constant and $M $ denotes the number of domains. To prove the corollary, we just need to prove that $B(1) \\\\geq B(M) \\\\text{ for } M \\\\geq 1 $. 
Consequently, under the condition, we have:\\n \\\\begin{align}\\n &1 \\\\leq M \\\\leq \\\\frac{C_1\\\\log\\\\delta^{-1}+C_2}{C_3\\\\log2\\\\delta^{-1}} \\\\cdot n \\\\\\\\\\\\\\\\\\n \\\\Rightarrow &\\\\frac{C_3\\\\log{2\\\\delta^{-1}}}{n}(1-M) - \\\\frac{C_1\\\\log\\\\delta^{-1}+C_2}{M}(1-M) \\\\geq 0 \\\\\\\\\\\\\\\\\\n \\\\Rightarrow&B(1) \\\\geq B(M) \\\\quad (M \\\\geq 1)\\\\nonumber\\n \\\\end{align}\\nBased on Corollary 1, we **design multi-domain circuit datasets to train the GNN model for enhanced generalization capability**. In this paper, circuits with **similar functionalities**, such as arithmetic, control, and memory, are grouped into distinct circuit domains.\\n\\n**Moreover, the GNN model achieves high generalization capability by learning domain-invariant information (i.e., inductive bias).** Specifically, [1] observed that the effectiveness of a node-level transformation is closely linked to the local subgraph rooted at the node, regardless of the circuit to which the node belongs. Based on this observation, they proposed extracting subgraphs rooted at individual nodes to generate node embeddings that capture inductive biases. These embeddings are capable of learning domain-invariant representations, thereby enabling the model to generalize effectively to unseen circuits. In this work, we follow [1] to train a GNN model with a strong inductive bias.\", \"title\": \"Rebuttal by Authors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback.\", \"comment\": \"Dear Reviewer oLyS,\\n\\nWe would like to express our sincere gratitude once again for your positive feedback, insightful comments, and constructive suggestions. Your guidance has been invaluable in helping us improve the quality of our work!\\n\\nWe are writing to gently remind you that **the author-reviewer discussion period will end in less than 36 hours**. We eagerly await your feedback to **understand if our responses have adequately addressed your concerns**. **If so, we would deeply appreciate it if you could raise your score**. If not, we are eager to address any additional queries you might have, which will enable us to further enhance our work.\\n\\nOnce again, thank you for your kind support and constructive suggestions!\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"We would greatly appreciate hearing your feedback.\", \"comment\": \"Dear Reviewer oLyS,\\n\\nWe are writing to kindly remind you that the deadline for the author-reviewer discussion period is fast approaching (**ending in two days** on December 2nd). We greatly value your feedback and are eager to hear your thoughts on whether our responses have sufficiently addressed your concerns. Thank you once again for your thoughtful comments and valuable support throughout this process.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"We eagerly await your feedback\", \"comment\": \"Dear Reviewer oLyS,\\n\\nWe are writing to gently remind you that **the deadline for the author-reviewer discussion period is approaching** (due on December 2nd). We eagerly await your feedback to understand if our responses have adequately addressed all your concerns. *If so, we would deeply appreciate it if you could raise your score*. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. We sincerely thank you once more for your insightful comments and kind support.\\n\\nBest,\\n\\nAuthors\"}"
]
} |
EFzBhrEp8Y | MME-FINANCE: A Multimodal Finance Benchmark for Expert-level Understanding and Reasoning | [
"Ziliang Gan",
"Yu Lu",
"Dong Zhang",
"Haohan Li",
"Che Liu",
"Jian Liu",
"Ji Liu",
"Haipang WU",
"Chaoyou Fu",
"Zenglin Xu",
"Rongjunchen Zhang",
"Yong Dai"
] | The remarkable capability of existing Multimodal Large Language Models~(MLLMs) to understand general natural images have been extensively demonstrated in plentiful benchmarks. Nevertheless, the potential of MLLMs in finance domain remains to be fully explored. Financial images exhibit a wide range of variations, encompass intricate details, and demand professional expertise for proper interpretation, thereby posing a significant challenge for MLLMs in terms of their fine-grained perception and complex reasoning capabilities. To bridge this gap, we introduce MME-FINANCE, a novel benchmark designed specifically to assess MLLMs' performance in open-ended financial Visual Question Answering (VQA). Our benchmark consists of over 1,000 VQA pairs spanning a wide range of complex financial scenarios. We devise multi-tiered financial tasks tailored to the specific characteristics of the financial domain, aiming to comprehensively evaluate the perception, reasoning, and cognition capabilities of MLLMs.
Furthermore, we employ a multimodal evaluation approach that incorporates visual data to score the model predictions, thereby aligning more closely with human judgment. Extensive experimental evaluations of 18 mainstream MLLMs reveal their limitations in financial tasks and provide insights to inspire further research. | [
"Multimodal; Benchmark"
] | https://openreview.net/pdf?id=EFzBhrEp8Y | https://openreview.net/forum?id=EFzBhrEp8Y | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"hd9NT6CZxB",
"Yb2jw1hX1P",
"SjPnxXjWac",
"SNz1O0auUy",
"Gcubs6x4zS"
],
"note_type": [
"official_review",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1730767093615,
1731657913528,
1730696303374,
1730682906989,
1730081375894
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5673/Reviewer_Kib8"
],
[
"ICLR.cc/2025/Conference/Submission5673/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5673/Reviewer_RUz1"
],
[
"ICLR.cc/2025/Conference/Submission5673/Reviewer_KL7R"
],
[
"ICLR.cc/2025/Conference/Submission5673/Reviewer_GWAX"
]
],
"structured_content_str": [
"{\"summary\": [\"This work presents a multimodal understanding benchmark specifically for evaluating the capabilities of MLLMs in financial domains. With different image types and question focus, the benchmark could provide a comprehensive analysis of MLLM's capabilities in the financial domain.\", \"Given the open-ended nature of some financial questions, the authors provide a novel evaluation strategy for better aligning with humans. By conducting an extensive evaluation of 18 MLLMs, insights about their ability in the financial domain are provided.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This work investigates a rarely-explored setting for evaluating MLLM performance, particularly the financial domain. This focus on financial analysis is timely given the increasing complexity of financial data and the need for advancing analytical tools.\", \"The proposed benchmark employs a rigorous multimodal evaluation method combined with image information, which enhances the alignment with human judgment.\", \"The authors conducted thorough evaluations on 18 mainstream MLLMs, revealing critical insights into their limitations in processing financial tasks.\"], \"weaknesses\": [\"While the benchmark covers various types of financial images (e.g., candlestick charts, statistical charts), it may not encompass all possible scenarios encountered in real-world finance. Given the relatively small number of image-question pairs (1171), the limitation could affect the generalizability of the findings to broader financial contexts.\", \"The specialized context in the financial domain requires the reliance on expert annotators. 
Although the evaluation scores show a high relevance with human-annotated scores, this reliance may still introduce biases or inconsistencies based on individual interpretations of financial data.\"], \"questions\": [\"What considerations lead to the selection of the six types of financial images included in MME-FINANCE, and are there plans to expand this scope to include additional types of financial data in future iterations?\", \"Another concern is on the hallucination evaluations, as the number of evaluation samples is quite small. Considering that hallucinations could also happen in other capabilities and tasks evaluations, can you provide more details on how MME-FINANCE evaluates hallucinations in MLLMs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper proposes MME-FINANCE to evaluate the financial capability of MLLMs. MME-FINANCE consists of three levels, including Perception, Cognition and Reasoning. They collect data from financial images and manually check the automatically generated question and answer by GPT-4o. They conduct experiments to show the financial capability of MLLMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The category of this benchmark is novel. MME-Finance is the first benchmark to evaluate the financial capabilities of MLLMs.\\n2. The paper is well-written and easy to follow.\", \"weaknesses\": \"1. The evaluation process may not be reliable when using GPT-4o as an evaluator, as shown in Fig. 4. The questions and answers generated by GPT-4o still require manual checking. How can correctness be ensured when using GPT-4o? Perhaps the multiple-choice format is more reliable.\\n2. I think some classes of MME-FINANCE are not necessary, such as the capabilities of Image Caption, or OCR. There have been several benchmarks to evaluate these capabilities. Creating a new benchmark should focus on one specific capability. The hierarchical design of MME-Finance can be improved.\\n3. CoT evaluation would enhance the comprehensiveness of experiments.\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"Existing Multimodal Large Language Models (MLLMs) excel in understanding general natural images but face challenges in interpreting complex financial images, which require specialized knowledge and fine-grained reasoning. To address this gap, the authors propose the MME-FINANCE benchmark, which focuses on evaluating MLLMs' performance in open-ended financial Visual Question Answering (VQA). This benchmark includes over 1,000 VQA pairs across diverse financial scenarios, with tasks tailored to assess perception, reasoning, and cognitive abilities specific to finance. Using a multimodal evaluation approach aligned with human judgment, experiments on 18 mainstream MLLMs highlight their current limitations in finance, offering insights for further advancement.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed benchmark is a new task for MLLM.\\n\\nThe efforts are awesome.\", \"weaknesses\": \"1. How was the expert revision conducted? More information should be provided. BTW, what is the \\\"experts reversion\\\" in Figure 2?\\n2. Using GPT-4o to evaluate the performance of GPT-4o is not fair. Cross-validation should be considered.\\n3. Lack of comparisons with Claude and Gemini, which is critical.\", \"questions\": \"See Weaknesses. More models should be considered for evaluation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors present a multimodal financial chart understanding benchmark and evaluate the performance of 18 models using a prompt-based evaluation method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The first multimodal benchmark for financial knowledge Q&A, filling a gap in the field.\\n2. Charts are sourced from real-world data, and the Q&A has been manually reviewed and refined.\\n3. The models tested are fairly new and comprehensive.\\n4. The analysis of different MLLMs as judges is notable for evaluations.\", \"weaknesses\": \"1. The authors do not clarify the source of the charts, raising concerns about potential privacy or copyright issues.\\n2. There is a lack of overall dataset analysis. For example, is there a domain gap between the charts and other datasets like MME or SEED? If the charts come from a narrow range of sources, can the authors prove the diversity of chart styles (not just chart types) through clustering methods?\\n3. The GPT-4 prompt-based approach is already widely adopted, and the evaluation manner is costly.\\n4. The metadata only includes charts, not the ground truth (GT) for the Q&A. The GT is generated by GPT with human annotation. Reviewers would like to see more details on quality control, such as examples of bad cases, the preferences of 3 finance researchers during filtering, and the proportion of cases eliminated at each step. This would help assess the reliability of the GT.\", \"questions\": \"1. Why is it called MME-FINANCE? Is it meant to complement MME or does it have a different meaning for MME?\\n2. As a multimodal financial benchmark, do the model performance trends align with those from text-only benchmarks?\\n3. Apart from cognition tasks requiring financial knowledge, how different are the other tasks from mainstream benchmarks like MME? Do the evaluation results align with these mainstream benchmarks?\\n4. 
Could the authors provide more insights, such as how to improve the performance of such models after such a comprehensive MLLM review?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EFhzmn3RJG | Taming Gradient Oversmoothing and Expansion in Graph Neural Networks | [
"MoonJeong Park",
"Dongwoo Kim"
] | Oversmoothing has been claimed as a primary bottleneck for multi-layered graph neural networks (GNNs). Multiple analyses have examined how and why oversmoothing occurs. However, none of the prior work addressed how optimization is performed under the oversmoothing regime. In this work, we show the presence of $\textit{gradient oversmoothing}$ preventing optimization during training. We further analyze that GNNs with residual connections, a well-known solution to help gradient flow in deep architecture, introduce $\textit{gradient expansion}$, a phenomenon of the gradient explosion in diverse directions. Therefore, adding residual connections cannot be a solution for making a GNN deep. Our analysis reveals that constraining the Lipschitz bound of each layer can neutralize the gradient expansion. To this end, we provide a simple yet effective normalization method to prevent the gradient expansion. An empirical study shows that the residual GNNs with hundreds of layers can be efficiently trained with the proposed normalization without compromising performance. Additional studies show that the empirical observations corroborate our theoretical analysis. | [
"graph neural network",
"deep neural network",
"oversmoothing",
"optimization"
] | https://openreview.net/pdf?id=EFhzmn3RJG | https://openreview.net/forum?id=EFhzmn3RJG | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"k5Gq8KDET5",
"YywRu1Vj72",
"Sh853lX2wz",
"RbO0uvVTRg",
"NjrALrPPED",
"3EghJhQrhx"
],
"note_type": [
"official_review",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1730636562303,
1732263897710,
1730129460045,
1730653287456,
1730168622488,
1732263829933
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4172/Reviewer_7mh1"
],
[
"ICLR.cc/2025/Conference/Submission4172/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4172/Reviewer_nbzZ"
],
[
"ICLR.cc/2025/Conference/Submission4172/Reviewer_m3nv"
],
[
"ICLR.cc/2025/Conference/Submission4172/Reviewer_Yjtw"
],
[
"ICLR.cc/2025/Conference/Submission4172/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper studies the behavior of gradients in deep GNNs by considering the oversmoothing problem. The authors show that when GNNs are very deep, the representations become similar (which is a known thing), and that the gradients in first layers become very similar (makes sense by inspecting the Jacobian of the GNN).\\n\\nThe authors perform several experiments, mostly on simple node classification datasets, and show the training/test accuracies alongside the gradient and feature similarities.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"In terms of strengths, the paper is nicely written and easy to follow. It also sheds more light on the oversmoothing problem. The experiments conducted to understand the theoretical findings are in-order.\", \"weaknesses\": \"In terms of relevance and novelty, as well as related works, I think that the main issue is that most of the presented results are already quite known within the community, and can be found in the literature. For example see \\\"A survey on oversmoothing in graph neural networks\\\" and \\\"Simplifying the Theory on Over-Smoothing\\\". Also, the authors lack a discussion of \\\"Revisiting Graph Neural Networks: All We Have is Low-Pass Filters\\\".\\n\\nIn terms of using the Lipschitz constant and bounding it, it was shown in \\\"On the Robustness of Graph Neural Diffusion to Topology Perturbations\\\" and \\\"Contractive Systems Improve Graph Neural Networks Against Adversarial Attacks\\\" that using it can help to address robustness problems, so it would be interesting to understand the connection between oversmoothing and such approaches. \\n\\nIn terms of experiments, the authors used mostly simple datasets, and I think that it would be beneficial to also study the performance on other tasks and datasets. I think it would also be interesting to see a report of the Dirichlet energy, that is usually reported in oversmoothing studies. 
Also, I feel that the experiments lack comparisons with other methods that can address oversmoothing (there are already many such methods).\", \"questions\": \"Please see my suggestions in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We have decided to withdraw our submission to further develop the work based on the valuable insights gained from the reviews. We sincerely appreciate all the reviewers for their constructive feedback and time.\"}",
"{\"summary\": \"The paper analyzed the over-smoothing phenomenon in Graph Neural Networks (GNNs) from a unique perspective, uncovering the issue of gradient expansion in residual connections. To address the problem of gradient explosion, the author introduced a layer-wise Lipschitz constraint, which facilitates efficient training of residual GNNs and enhances their performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a comprehensive theoretical framework for analyzing the over-smoothing phenomenon in GNNs and gradient expansion in residual GNNs, introducing the concepts of representation similarity and gradient similarity to further explain these phenomena.\\n\\n2. The mathematical derivations are rigorous yet easy to comprehend.\\n\\n3. The author extensively investigates gradient similarity and representation similarity through numerous experiments, examining their variation throughout the training process.\\n\\n4. The paper proposes a Lipschitz constraint method that effectively alleviates the effects of over-smoothing and gradient expansion during the training process.\", \"weaknesses\": \"1. In line 147, the equation $$ \\\\|\\\\mathbf{X}\\\\|_F^2 = \\\\mu(\\\\mathbf{X})^2 + \\\\|\\\\mathbf{X} - \\\\mathbf{B}\\\\mathbf{X}\\\\|_F^2 $$ is presented. A step-by-step derivation or proof of this equation in an appendix would greatly enhance the clarity of the methodology presented.\\n\\n2. The authors provided a mathematical derivation for representation similarity and gradient similarity, validating their correctness in Figures 1 and 2. However, it is unclear which metric better measures over-smoothing, as the relationships among representation similarity, gradient similarity, and test accuracy appear ambiguous in Figure 3. It seems the authors primarily used gradient similarity to assess over-smoothing in Section 5. 
In references [1] and [2], Dirichlet energy appears to be a more effective metric compared to gradient similarity. It would be helpful if the authors could provide a more detailed comparison between gradient similarity and other metrics like Dirichlet energy, including quantitative results, which would help clarify the advantages of their chosen metric.\\n\\n3. In my view, large datasets can mitigate oversmoothing compared to smaller ones. Intuitively, when aggregating features, each node has more distant neighbors, making it harder for features to become overly similar. As shown in [2], Pubmed and Ogbn-arxiv are less affected, as seen in Table 1. Could the authors extend their theoretical analysis to consider the impact of dataset size on oversmoothing to enhance the theoretical persuasiveness of the paper?\\n\\n4. The Lipschitz constraint method is an effective approach to alleviating over-smoothing and gradient expansion. It would enhance the paper's strength to incorporate 2-3 specific baseline methods from the related work section and utilize 1-2 large datasets from references [1] and [2] that are particularly relevant for comparison.\\n\\n5. It appears that the theoretical derivation in the paper is based on the GCN model, yet the authors have also conducted some experiments on GAT, such as in Figure 4. Perhaps the authors should provide a corresponding theoretical analysis to enhance the persuasiveness of the paper. Additionally, if the model is changed to another one, such as GraphSage, would the theory still hold?\\n\\n[1] A Survey on Oversmoothing in Graph Neural Networks \\n[2] Dirichlet Energy Constrained Learning for Deep Graph Neural Networks\", \"questions\": \"Refer to weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper argues that gradient oversmoothing and gradient expansion pose challenges in training deep Graph Neural Networks (GNNs). Unlike previous approaches that focus on node features, the authors apply a previously defined similarity measure to gradients. They also establish an asymptotic upper bound for this measure, describing the phenomenon where it approaches zero as \\\"gradient oversmoothing,\\\" and linking its growth to \\\"gradient expansion.\\\" To address these issues, the authors propose a novel normalization technique. Empirical results on various graph datasets and architectures demonstrate that this measure decreases as layer depth increases, and the new normalization approach enables successful training of deep GNNs.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper studies gradient behavior during training of deep GNNs. While this is a widely studied field, mainly linked to oversquashing these days, its full qualitative behavior remains to be studied in more detail. The paper is further a good combination of theoretical and empirical work.\", \"weaknesses\": \"My biggest concern is that the statement on gradient oversmoothing in the asymptotic case is likely to be equivalent to gradient vanishing, i.e., that gradients vanish whenever the proposed similarity measure vanishes and vice versa. In fact, very similar statements to Theorem 1 (i.e., bounds depending on the weight matrix W) can be made for the gradients directly (see Di Giovanni et al, 2024 for instance). Of course it can happen that in the pre-asymptotic case this gradient similarity measure is small or even zero at some point, but this does not denote an issue.\\n\\nThis has to be addressed, otherwise this paper cannot be published. In particular, the authors have to show that there exists an asymptotic regime, where the similarity measure in Theorem 1 goes to zero, but the gradient norms do not. 
I suspect there are no such cases, but I am happy to be convinced otherwise. The same has to be done for Theorem 2.\\n\\nThat being said, this also has to be demonstrated empirically. The provided plots in the paper only show the effect of varying the depth on the similarity measure. The same has to be done for the gradient norms for the exact same setup, i.e., same architecture with same weights. This would demonstrate that there are in fact cases where gradients oversmooth but do not vanish.\", \"other_issues\": [\"Some mathematical statements are wrong. For instance, the sentence after equation (3). Here, if B were perpendicular to span($\\bf 1$), it should be in $\\mathbb{R}^{N-1,N}$, not in $\\mathbb{R}^{N-1,N-1}$. Moreover, this sentence does not add anything to the context and is simply taken from Wu et al., who used it to demonstrate the similarity between this measure and another oversmoothing measure in the literature.\", \"Most of the theory seems to be taken from other papers. It would be good if the authors could state explicitly what their own theoretical contribution is.\", \"It would be good to extend to nonlinear layers, since in practice linear GNNs are not very common.\", \"Only four small-scale graph datasets are considered. It would strengthen the paper if this were extended.\"], \"minor_issues\": [\"The writing needs to be revised. There are many typos.\"], \"questions\": \"Can you try some of the experiments with GNN architectures that are known to not suffer from gradient vanishing/explosion or oversmoothing? If your gradient similarity measure would go to zero or explode for these cases, it would strengthen your claim.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors analyze the effects of the addition of residual connections in **linear** GNNs **with no non-linear activation function** (specifically GCNNs) in an effort to combat oversmoothing (in extremely deep GNNs). In a pair of results (Theorems 1 and 2) the authors analyze the decay of a (reasonable) orthogonal projection of the gradient of the loss with respect to perturbations of entries of the feature matrix.\\n\\nThey use their theoretical upper bounds to suggest that spectral weight normalization, in the spirit of [1], would remedy oversmoothing. Only empirical studies are then used to support a (presumably) theoretical claim.\\n\\n\\n\\n[1] Neyshabur, Behnam, Russ R. Salakhutdinov, and Nati Srebro. \\\"Path-sgd: Path-normalized optimization in deep neural networks.\\\" Advances in neural information processing systems 28 (2015).\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The numerical experiments are very nicely executed, clear, convincing, and reproducible (but I still have questions as they do not seem to match the maths). The bounds are reasonable (even if the proof is extremely straightforward....) and interesting.\", \"weaknesses\": [\"**Identity Activation Only**: Only identity activations are considered (it doesn't seem to be difficult to obtain a result for non-linear activations fixing the origin from your proofs...).\", \"**Connected Graphs**: Why not consider disconnected graphs? It seems to me that with only a bit more work, you can generalize your result by noting that the normalized adjacency matrix has a block-diagonal form in the general (non-bipartite) case.\", \"Results are for non-activated GNNs (identity activation function) but the numerics have non-linear activations (and results with identity activation are not plotted). 
This latter point, namely that there are no numerical experiments matching the theoretical setting, makes me wonder if there is a gap... Does the theory really come through in practice? One **needs** illustrations matching the architectural setup in the theorem.\", \"The theory is for a very basic GCNN model (which is definitely not universal by any means) but many other GNN models are considered in the experiments section. What is the relevance?\"], \"questions\": [\"As the graph size (number of nodes specifically) diverges, what happens to the proposed normalization?\", \"Do you not have any theoretical guarantees for the proposed procedure since you could not obtain matching lower bounds? I assume that would be required to argue that the normalization does the trick.\", \"Can you add a few more words explaining what the trouble is with the bipartite case?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Withdrawal of Submission and Thanks to Reviewers\", \"comment\": \"We have decided to withdraw our submission to further develop the work based on the valuable insights gained from the reviews. We sincerely appreciate all the reviewers for their constructive feedback and time.\"}"
]
} |
|
EFZEdHB3Mp | DynaPrompt: Dynamic Test-Time Prompt Tuning | [
"Zehao Xiao",
"Shilin Yan",
"Jack Hong",
"Jiayin Cai",
"Xiaolong Jiang",
"Yao Hu",
"Jiayi Shen",
"Cheems Wang",
"Cees G. M. Snoek"
] | Test-time prompt tuning enhances zero-shot generalization of vision-language models but tends to ignore the relatedness among test samples during inference. Online test-time prompt tuning provides a simple way to leverage the information in previous test samples, albeit with the risk of prompt collapse due to error accumulation. To enhance test-time prompt tuning, we propose DynaPrompt, short for dynamic test-time prompt tuning, exploiting relevant data distribution information while reducing error accumulation. Built on an online prompt buffer, DynaPrompt adaptively selects and optimizes the relevant prompts for each test sample during tuning. Specifically, we introduce a dynamic prompt selection strategy based on two metrics: prediction entropy and probability difference. For unseen test data information, we develop dynamic prompt appending, which allows the buffer to append new prompts and delete the inactive ones. By doing so, the prompts are optimized to exploit beneficial information on specific test data, while alleviating error accumulation. Experiments on fourteen datasets demonstrate the effectiveness of dynamic test-time prompt tuning. | [
"Test-time prompt tuning; test-time adaptation; vision-language model; CLIP"
] | Accept (Poster) | https://openreview.net/pdf?id=EFZEdHB3Mp | https://openreview.net/forum?id=EFZEdHB3Mp | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"y5k3etJNcv",
"qbuZAfKUV4",
"pH0i60z2rF",
"m5NpdzAJbp",
"kf7oznSo2F",
"dvS0LpjbLj",
"dj08kLyAjm",
"X4xGmFcfm9",
"Wn9vgurSCo",
"USzxYtWbko",
"RnEPtejpd6",
"Oae9E24ecR",
"Ejk562Nfva",
"CwAffmerL4",
"CEZA50pKDc",
"4HazmkmBHv",
"44rudhKm2P",
"1InyQkHpqE"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732520616813,
1730698992551,
1732141678317,
1732138031779,
1734491242033,
1732443354194,
1733140673746,
1730478529759,
1730710244333,
1732156343206,
1732564800632,
1732137759955,
1737523592043,
1732138615572,
1732564749488,
1730533375650,
1733140711907,
1732138887218
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Reviewer_FeBK"
],
[
"ICLR.cc/2025/Conference/Submission3722/Reviewer_kMEy"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Area_Chair_5sPi"
],
[
"ICLR.cc/2025/Conference/Submission3722/Reviewer_tKkt"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Reviewer_kMEy"
],
[
"ICLR.cc/2025/Conference/Submission3722/Reviewer_tKkt"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Reviewer_qkjX"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3722/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for your updates and encouragement. Your suggestions have helped us improve the manuscript.\"}",
"{\"summary\": \"This paper addresses fundamental issues in test-time prompt tuning, specifically focusing on the selection, updating, appending, and deletion of prompts. The authors introduce an innovative dynamic test-time prompt tuning approach, which incorporates two novel prompt evaluation metrics alongside a prompt buffer modification strategy. Extensive experimental results underscore the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-structured and easy to follow, with the three technical components clearly and accessibly presented.\\n2. The proposed method is well-reasoned, employing dynamic prompt selection and updating mechanisms that are both effective and distinct from prior studies, which primarily focus on data manipulation.\\n3. The experiments are thorough, and the results convincingly demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"1. The range of comparison methods could be expanded, as the paper overlooks one relevant comparison method [1].\\n2. Both the prediction entropy metric and probability difference metric provide insights into model prediction confidence, though from different perspectives. It is unclear why entropy is specifically used to measure relevance while difference is used to maintain diversity. Why does the direct combination of these two types of prompts yield effective results? Would a two-step prompt selection process, satisfying both conditions simultaneously, be more advantageous?\\n3. Details regarding the data augmentation set $X_n$ are insufficiently discussed in this paper.\\n\\n[1] Dingchu Zhang, Zhi Zhou, Yufeng Li: Robust Test-Time Adaptation for Zero-Shot Prompt Tuning. 
AAAI 2024: 16714-16722\", \"questions\": \"Please refer to the `Weaknesses` section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I would like to thank the authors for their detailed responses to my questions. I have carefully reviewed their replies, which have addressed all of my concerns. Based on this, I have decided to increase my score.\"}",
"{\"title\": \"Response to Reviewer FeBK\", \"comment\": \"We thank Reviewer FeBK for the constructive feedback and insightful comments. We hope to address the concerns of the reviewer with the responses below.\\n\\n**Weaknesses**\\n\\n**Comparisons with AdaPrompt**\\n\\nThank you for pointing us to this related work on AdaPrompt by Zhang et al. We provide methodological and performance comparisons:\\n\\n1) Methodological Comparisons: AdaPrompt performs test-time prompt tuning on a batch of 64 test samples per step, leveraging a buffer to store confident, class-balanced samples for improved tuning and prediction. In contrast, our method dynamically selects and tunes prompts for each single test sample using augmentations.\\n\\n2) Performance Comparisons: As the benchmarks in AdaPrompt are not exactly the same as in our paper, we reproduced the method on the missing datasets using their released code. The comparisons are shown in the following tables, with reproduced results indicated in *italics*. \\nOur method performs competitively in the cross-dataset setting and outperforms AdaPrompt in the domain generalization setting. This performance gap in the domain generalization setting may arise from the large-scale label space (1000 for ImageNet-V2/S and 200 for ImageNet-A/R), which prevents AdaPrompt's sample buffer from storing class-balanced test samples, leading to degradation. 
By contrast, our method dynamically tunes the prompt for each sample with its augmentations, therefore achieving consistently good performance.\\n\\n\\n| Method | Caltech101 | Pets | Cars | Flower | Food101 | Aircraft | Sun397 | DTD | EuroSAT | UCF101 | Mean |\\n|--------------|------------|-------|-------|--------|---------|----------|--------|-------|---------|--------|--------|\\n| AdaPrompt | 94.07 | **89.64** | *63.29* | **72.97** | 84.72 | *21.21* | *65.37* | 44.75 | **47.20** | 67.22 | 65.04 |\\n| ***This paper*** | **94.32** | 88.28 | **67.65** | 69.95 | **85.42** | **24.33** | **66.32** | **47.96** | 42.28 | **68.72** | **65.52** |\\n\\n| Method | Imagenet-v2 | Imagenet-S | Imagenet-A | ImageNet-R | Mean |\\n|--------------|-------------|------------|------------|------------|-------|\\n| AdaPrompt | *59.32* | *47.72* | *47.71* | 73.98 | 57.18 |\\n| ***This paper*** | **64.67** | **48.22** | **56.17** | **78.17** | **61.81** |\\n\\nWe highlighted AdaPrompt in Related Work (Section 5) and added the comparisons in Section 6.\\n\\n\\n\\n**Dynamic prompt selection metrics and strategy**\\n\\nIndeed, the proposed two metrics are designed on prediction confidence, but they measure different properties of the model predictions.\\n1) Prediction entropy measures the relevance of the prompt and the sample: A lower prediction entropy means the prompt is more confident in its prediction (Niu et al. 2022; Zhang et al. 2024), indicating the prompt has more prior information about the sample. 
In this case, the prompt and sample are more relevant to each other.\\n\\n2) Probability difference measures the prompt\\u2019s sensitivity to the sample augmentations: A larger probability difference means the prompt is more sensitive to the sample augmentations, which implies the prompt predictions are more diverse, effectively avoiding over-confident prompts.\\n\\nWe clarify that since we use the measurements of the initial prompt $v_0$ as thresholds to select subsets for both metrics, the selections are independent. By taking the intersection of the two subsets, the selected prompts have both lower entropy and larger probability differences. Therefore, whether we apply the two measures separately or sequentially, the selected prompts are the same, simultaneously satisfying both conditions. We included these discussions in Section 4.\\n\\n\\n**Details of data augmentation**\\n\\nWe follow the typical data augmentation strategy in TPT (Shu et al. 2022). Specifically, we use AugMix (Hendrycks et al. 2020) to augment the original test image into 63 different augmentation samples, leading to 64 samples in total for each test image. Each test image is first augmented by *\\u201cresize\\u201d* and random *\\u201ccrop\\u201d*, then fed into the AugMix strategy with several augmentation methods including *auto contrast*, *equalization*, *posterization*, *rotation*, *solarization*, *shearing*, and *translating*. We added the data augmentation clarification in Appendix B.\\n\\nHendrycks D, Mu N, Cubuk E D, et al. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. ICLR 2020.\"}",
"{\"metareview\": \"The paper tackles the problem of test-time prompt tuning with a new approach that adaptively updates samples in the test-time learning pool and thus improves sample quality for test-time prompt tuning. The paper received four reviews of 1x accept, 2x borderline accept, and 1x borderline reject ratings. In general, the reviews are positive. The reviewers appreciated the idea of dynamic test-time prompt tuning and acknowledged the effectiveness as evidenced by the results. In the meantime, the reviewers had questions about the increased computational cost caused by the dynamic learning mechanism and requested more results using different backbones and more baselines. The authors properly addressed these during the rebuttal. The proposed method is novel and the findings are valuable to the community. Therefore, the AC recommends that the paper be accepted.\", \"additional_comments_on_reviewer_discussion\": \"In the first-round review, the reviewers requested that the authors expand the comparisons to include more methods and analysis, as well as more results using different backbones. The rebuttal provided additional results to support the paper. Reviewer kMEy engaged in the rebuttal discussion and increased the score to accept. Reviewer FeBK did not engage. The AC has checked the rebuttal and found that the rebuttal has done a good job in addressing these concerns.\"}",
"{\"comment\": \"Thanks for your responses. Most of my concerns have been addressed and I would like to maintain my original rating. This is a meaningful work, and I lean toward acceptance.\"}",
"{\"comment\": \"Dear reviewer FeBK,\\n\\nI hope this message finds you well. Thank you for your time and efforts in reviewing our submission. Your insights and expertise are greatly appreciated.\\n\\nWe submitted our rebuttal on November 20 and value your evaluation and feedback. As the discussion period is nearing its conclusion in two days, we kindly follow up for your review of our response.\\n\\nPlease feel free to let us know if you have any additional questions to discuss. We are more than willing to provide further clarification or engage in discussion to address any concerns.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"summary\": \"This paper introduces DynaPrompt, a test-time prompt-tuning (TPT) approach that exploits information from previous test samples while avoiding error accumulation. While naively adapting TPT to the online setting leads to collapse, DynaPrompt leverages a dynamic buffer of prompts and optimizes only the most relevant prompts for each test sample. Furthermore, DynaPrompt introduces only one additional hyper-parameter, the buffer size, thanks to an adaptive thresholding strategy. DynaPrompt demonstrates consistent improvement over TPT and its variants (CoOp+TPT, MaPLe+TPT), making it a simple and effective alternative.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Motivation**: The authors did a good job motivating the paper, illustrating the potential benefit of moving to the online scenario while showing the non-triviality of extending TPT to this setup.\", \"**Contribution**: The technical contribution of the paper is simple and effectively solves a clearly identified issue.\", \"**Clarity**: The paper is easy to follow and arguments are clearly articulated.\", \"**Experiments**: The experiments are convincing and show consistent improvements over the baselines.\"], \"weaknesses\": [\"**Missing experiments**: Some experiments are missing, like evaluation on different backbones and, more importantly, evaluation on the Imagenet dataset. I will consider raising my score if results on the Imagenet dataset are added.\"], \"questions\": [\"Unless I missed it, it seems that there are no results on the Imagenet dataset. Could you provide results on the Imagenet dataset?\", \"Could you provide evaluation results on other CLIP vision backbones such as ViT-B/32 or Resnet-50?\", \"Why did you not include results for CoOp + DynaPrompt in Table 2?\", \"If I understand your method correctly, the buffer initially contains only one prompt initialized as \\u2018a photo of a\\u2019. 
For a given test sample, if your selection criteria (entropy and probability difference) are not met, a new prompt is initialized with \\u2018a photo of a\\u2019; otherwise, selected past prompts are used for optimization. I\\u2019m curious whether you considered initializing a set of prompts from the beginning and using your selection criteria to optimize subsets of this buffer. Initially, all prompts would be identical, but a small degree of randomness at the start could help address this issue. Do you think collapse would occur in that scenario? An experimental result would be helpful, but your thoughts or intuition on this would be sufficient.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a dynamic test-time prompt tuning approach that enhances zero-shot generalization in vision-language models. It leverages the beneficial information from previous online samples to adaptively select and optimize prompts, reducing error accumulation and improving model performance across various datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-organized and easy to follow.\\n2. The proposed DynaPrompt effectively mitigates error accumulation, a prevalent challenge in online test-time tuning, leading to more stable performance across sequential test samples while exploiting beneficial information from prior online test samples.\\n3. Despite the increased time costs associated with larger prompt buffer sizes, the experimental outcomes confirm the effectiveness of the proposed method.\", \"weaknesses\": \"1. In prompt learning, the initial prompts might affect the final performance. I wonder whether a similar situation can occur with the proposed method. The authors are encouraged to conduct related experiments.\\n2. Could the proposed method be extended to incorporate visual prompts, thereby evolving into a multimodal approach? Additionally, when integrated with MaPLe, is the method only applied to the textual branch?\", \"questions\": \"1. I am curious about how the order for each round in the 'Sensitivity to test time sample order' section was set.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Your suggestions help improve our manuscript a lot. Thanks for your updates and prompt encouragement.\"}",
"{\"title\": \"Looking forward to your response\", \"comment\": \"Dear Reviewer qkjX,\\n\\nWe sincerely thank you for the insightful review. We appreciate the time and effort you put into reviewing our work. We have carefully considered your comments and made improvements based on your suggestions.\\nAs the discussion period will end in the next two days, please feel free to let us know if you have any further comments. We are willing to engage in further discussion.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer tKkt\", \"comment\": \"We thank Reviewer tKkt for the constructive feedback and insightful comments. We hope to address the concerns of the reviewer with the responses below.\\n\\n**Weaknesses**\\n\\n**Effect of initial prompts**\\n\\nThank you for sharing the insight. We conducted experiments on ImageNet-A using various initial text prompts. As shown in the following table, the initial prompts affect the performance of CLIP (Radford et al. 2021), TPT (Shu et al. 2022), as well as our method. The reason can be related to the initial predictions of the original CLIP model. Nonetheless, our method consistently outperforms TPT, showing robustness despite variations in initialization. We added this experiment and discussions in Appendix C.\\n\\n| Initial prompt | CLIP | TPT | *This paper* |\\n|------------------------|-------|-------|------------|\\n| a photo of a | 47.87 | 54.77 | **56.17** |\\n| an image of a | 48.31 | 54.84 | **56.19** |\\n| high-quality of a | 44.48 | 51.47 | **52.57** |\\n| Identify feature of | 46.08 | 50.18 | **51.84** |\\n| Visual features of the | 46.21 | 51.48 | **53.13** |\\n| natural photo of a | 44.82 | 52.56 | **54.08** |\\n| classify the photo of | 46.60 | 52.51 | **53.27** |\\n| average | 46.35 | 52.54 | **53.89** |\\n\\n\\n**Multimodal test-time prompt tuning**\\n\\nWe clarify that our approach can be evolved into a multimodal setting based on MaPLe. When integrated with MaPLe, our dynamic test-time prompt tuning is already applied on *both* the textual and visual branches. We clarified the corresponding implementation details in Section 6.\\n\\n**Questions**\\n\\n**How to set sample orders during dynamic prompt tuning**\\n\\nWe shuffle the sample order in the dataloader with different random seeds at test time, which leads to different sample orders. We added this description in Section 6 and will release the corresponding PyTorch code.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to Reviewer qkjX\", \"comment\": \"We thank Reviewer qkjX for the constructive feedback and insightful comments. We hope to address the concerns of the reviewer with the responses below.\\n\\n**Weaknesses**\\n\\n**Time costs of dynamic selection and appending**\\n\\nWe acknowledge our improved prediction performance comes with an increased computational cost. We note that most of the additional time cost stems from optimizing multiple prompts rather than the selection and appending strategy. For instance, on ImageNet-A, with a prompt buffer size of 10, the total processing time per test sample is approximately 0.39 seconds, of which the selection and appending steps account for only 0.004 seconds. We will clarify this in Section 6.\\n\\n**Ablations on prompt length**\\n\\nWe thank the reviewer for sharing the insight. To investigate the prompt length effect, we experiment with longer prompts for both TPT and Online TPT. We set the prompt length to 40, which is 10 times longer than the four-token original *\\u201ca photo of a\\u201d*. We consider two types of long prompts: (A) *\\u201ca photo of a\\u201d* repeated 10 times, (B) a prompt generated by GPT-4o: *\\u201cLet us solve an image classification task: a photo of a distinct object, animal, plant, or scene, captured in diverse environments and representing meaningful categories. Carefully analyze its features; the exact category of the photo is a\\u201d*.\\n\\nAs shown in the following table, longer trainable prompts do not solve the problem of prompt collapse for online learning, even worsening the problem. Since the online testing fails due to error accumulation and prompt collapse, simply increasing the length of the prompts does not help. Specifically designed long prompts (B) perform better on the optimization-free CLIP model. However, they may lead to more difficult optimization for test-time tuning, resulting in worse TPT performance. 
By contrast, based on the prompt selection and appending strategy, our method achieves better performance while reducing error accumulation and prompt collapse. We added the experiments in Appendix C.\\n\\n| Method | Initial prompt | Prompt length | Accuracy |\\n|----------------|-----------------------------|-------------------------|----------|\\n| CLIP | *\\\"a photo of a\\\"* \\t | 4 | 47.87 |\\n| | long prompt (A) | 40 | 46.99 |\\n| | long prompt (B) | 40 | 48.21 |\\n| TPT | *\\\"a photo of a\\\"*\\t | 4 | 54.77 |\\n| | long prompt (A) | 40 | 52.97 |\\n| | long prompt (B) | 40 | 52.23 |\\n| Online TPT | *\\\"a photo of a\\\"*\\t | 4 | 6.96 |\\n| | long prompt (A) | 40 | 2.06 |\\n| | long prompt (B) | 40 | 4.24 |\\n| ***This paper*** | *\\\"a photo of a\\\"* \\t | 4 * 10 | **56.17** |\\n\\n\\n\\n\\n\\n\\n**Questions**\\n\\n**Prompt deleting**\\n\\nIn our method, deleting means the entire prompt is removed from the buffer, we clarified the text accordingly.\"}",
"{\"title\": \"Looking forward to your response\", \"comment\": \"Dear Reviewer FeBK,\\n\\nWe sincerely thank you for the insightful review. We appreciate the time and effort you put into reviewing our work. We have carefully considered your comments and made improvements based on your suggestions.\\nAs the discussion period will end in the next two days, please feel free to let us know if you have any further comments. We are willing to engage in further discussion.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"summary\": \"This paper proposes a new test-time prompt tuning method called DynaPrompt, which leverages a dynamic prompt buffer to extract beneficial information from online test samples while reducing error accumulation. The method adaptively selects and optimizes prompts for each test sample, enabling the selected prompts to integrate relevant information from previous test samples, thus improving the prediction performance of the current sample. Experimental results demonstrate that this method performs effectively across multiple benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper propose a dynamic prompting method that utilizes useful information from online test samples while mitigating the problem of error accumulation.\\n2. DynaPrompt improves the adaptive capability of the model by introducing a dynamic prompt selection strategy that adaptively selects and optimizes relevant prompts for each test sample based on two metrics: predictive entropy and probability difference.\\n3. During the dynamic prompt selection process, if no suitable prompts can be found, DynaPrompt employs a dynamic prompt appending strategy to append new initial prompts to the set of prompts and remove the least active prompts, thus effectively incorporating information from the new data distribution.\", \"weaknesses\": \"1. The computational complexity increases, and the authors' approach requires dynamic updating and selection of cues at each stage, whether it introduces more computational time.\\n2. The authors' approach demonstrates the advantages of dynamic prompting, however did the authors consider whether comparable performance could also be achieved if online testing was performed from scratch using a prompt length comparable to that of the final model.\", \"questions\": \"1. 
It is recommended that the authors add ablation experiments to further demonstrate the validity of the methodology by performing online tests from scratch using the same prompt lengths as the final model and performing performance comparisons.\\n2. The author's paper mentions deleting the prompt, does the author mean that the entire prompt is deleted in its entirety, or does he mean that only the parameters of the prompt are set to zero?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear reviewer qkjX,\\n\\nI hope this message finds you well. Thank you for your time and efforts in reviewing our submission. Your insights and expertise are greatly appreciated.\\n\\nWe submitted our rebuttal on November 20 and value your evaluation and feedback. As the discussion period is nearing its conclusion in two days, we kindly follow up for your review of our response.\\n\\nPlease feel free to let us know if you have any additional questions to discuss. We are more than willing to provide further clarification or engage in discussion to address any concerns.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer kMEy\", \"comment\": \"We thank Reviewer kMEy for the constructive feedback and insightful comments. We hope to address the concerns of the reviewer with the responses below.\\n\\n\\n**Questions**\\n\\n**Experiments on ImageNet**\\n\\nWe provide ImageNet results of our method, based on the ViT-B/16 backbone, in the following table, Our method achieves better performance than CLIP and TPT through dynamic prompt tuning. \\nCombined with the prompt learning methods CoOp and MaPLe, the performance of our method is further improved. We added the ImageNet results in Table 1.\\n\\n| Method | ImageNet |\\n|--------------------|----------|\\n| CLIP | 66.73 |\\n| CoOp | 71.51 |\\n| CoCoOp | 71.02 |\\n| MaPLe | 70.72 |\\n| TPT | 68.98 |\\n| ***This paper*** | 69.61 |\\n| CoOp + TPT | 73.61 |\\n| CoOp + ***This paper*** | **74.08** |\\n| MaPLe + TPT | 71.87 |\\n| MaPLe + ***This paper*** | 72.71 |\\n\\n\\n\\n\\n**Experiments with different backbones**\\n\\nTo evaluate the proposed method on different backbones, we conduct experiments for ImageNet-based datasets with the ResNet-50 and ViT-B/32 backbones. The experiments are provided in the following table. No matter the backbone, our method outperforms CLIP and TPT. 
We added these results to Appendix C.\\n\\n| RN 50 | ImageNet | ImageNet-V2 | ImageNet-S | ImageNet-A | ImageNet-R | mean | OoD mean |\\n|------------|----------|-------------|------------|------------|------------|-------|----------|\\n| CLIP | 58.16 | 51.41 | 33.37 | 21.83 | 56.15 | 44.18 | 40.69 |\\n| TPT | 60.74 | 54.7 | 35.09 | 26.67 | 59.11 | 47.26 | 43.89 |\\n| ***This paper*** | **61.56** | **55.12** | **35.64** | **27.84** | **60.63** | **48.16** | **44.81** |\\n\\n\\n| ViT-B/32 | ImageNet | ImageNet-V2 | ImageNet-S | ImageNet-A | ImageNet-R | mean | OoD mean |\\n|------------|----------|-------------|------------|------------|------------|--------|----------|\\n| CLIP | 62.05 | 54.79 | 40.82 | 29.57 | 65.99 | 50.64 | 47.79 |\\n| TPT | 63.64 | 57.22 | 41.66 | 34.63 | 69.42 | 53.31 | 50.73 |\\n| ***This paper*** | **64.72** | **58.10** | **42.04** | **36.05** | **70.46** | **54.27** | **51.66** |\\n\\n\\n\\n\\n**Results for CoOp + DynaPrompt in the cross-dataset setting**\\n\\nWe provide the requested results on the cross-dataset setting. Our method outperforms TPT based on the CoOp pretrained prompt for 9 out of 10 datasets. We included these results in Table 2.\\n\\n| Method | Caltech | Pets | Cars | Flowers | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 | Average |\\n|-------------------|----------|-------|-------|----------|----------|-----------|---------|-------|---------|---------|---------|\\n| CoOp + TPT | 93.15 | 89.48 | 66.77 | 68.48 | 86.48 | 20.51 | 66.06 | 43.32 | 37.73 | 68.91 | 64.09 |\\n| CoOp + ***This paper*** | 94.40 | 90.04 | 67.35 | 69.38 | 86.45 | 21.35 | 66.17 | 46.98 | 38.55 | 69.54 | 65.02 |\\n\\n\\n**Online prompts tuning with identical initialization**\\n\\nThe reviewer is correct that our buffer is initialized with only one prompt, and we append a new prompt initialized with \\u201ca photo of a\\u201d when no previous prompt is selected during online tuning. 
To demonstrate the effect of identically initializing a set of prompts, we conduct experiments on ImageNet-A with 10 initialized prompts in the prompt buffer. The prompts are initialized by the embedding of *\\u201ca photo of a\\u201d* with random noise. We use our selection strategy to select and optimize the prompts online. When no prompt is selected, we use the initial prompt *\\u201ca photo of a\\u201d* for prediction.\\n\\nAs shown in the following table, this variant outperforms online TPT and CLIP, which demonstrates that it reduces collapse since it provides diverse prompt options for different test samples through dynamic selection. \\nHowever, it underperforms TPT and our method. The reason can be that the negative influence of previous test samples during online updating is still not entirely solved by the limited number of predefined online prompts, which leads to error accumulation and suboptimal predictions.\\nWe added this experiment and discussion to Appendix C.\\n\\n\\n| Method | ImageNet-A | \\n|------------------------|-------|\\n| CLIP | 47.87 | \\n| Online TPT | 6.96 |\\n| Identical initialized online prompts with dynamic selection | 48.14 |\\n| TPT | 54.77 |\\n| ***This paper*** | **56.17** |\"}"
]
} |
EEgYUccwsV | AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials | [
"Yiheng Xu",
"Dunjie Lu",
"Zhennan Shen",
"Junli Wang",
"Zekun Wang",
"Yuchen Mao",
"Caiming Xiong",
"Tao Yu"
] | Graphical User Interface (GUI) agents hold great potential for automating complex tasks across diverse digital environments, from web applications to desktop software. However, the development of such agents is hindered by the lack of high-quality, multi-step trajectory data required for effective training. Existing approaches rely on expensive and labor-intensive human annotation, making them unsustainable at scale. To address this challenge, we propose AgentTrek, a scalable data synthesis pipeline that generates high-quality web agent trajectories by leveraging web tutorials. Our method automatically gathers tutorial-like texts from the internet, transforms them into task goals with step-by-step instructions, and employs a visual-language model (VLM) agent to simulate their execution in a real digital environment. A VLM-based evaluator ensures the correctness of the generated trajectories. We demonstrate that training GUI agents with these synthesized trajectories significantly improves their grounding and planning performance over the current models. Moreover, our approach is more cost-efficient compared to traditional human annotation methods. This work underscores the potential of guided replay with web tutorials as a viable strategy for large-scale GUI agent training, paving the way for more capable and autonomous digital agents. | [
"Data Synthesis",
"GUI Agent",
"Large Language Model"
] | Accept (Spotlight) | https://openreview.net/pdf?id=EEgYUccwsV | https://openreview.net/forum?id=EEgYUccwsV | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yq0dSgVnyp",
"xG4cuaPjSA",
"wFOjohuTnK",
"uGViNBbqNx",
"tyKcOpyS5s",
"tLSe1eQ9DY",
"rbMbyP5fmA",
"rIgMh0U47g",
"k9DCRp7f4T",
"eC2jjMfKat",
"d5JK6wraHg",
"aQXLe4qkQM",
"aGzqyvWN9J",
"YLu1tx3n7G",
"NFzCnQRBsH",
"MWH2bLhpqX",
"JKdEEaXJvZ",
"G33uKEP3z2",
"A6m69cYURD",
"4eGiRFrgeK",
"44c86sWycm",
"0egjmiuLQZ"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732554465260,
1732542385374,
1732586044611,
1732585439121,
1732642195225,
1732542577022,
1732554518741,
1730642281632,
1732562535131,
1733122561084,
1734672248190,
1730643163314,
1732736675451,
1730718821606,
1737523881412,
1732664178502,
1732542335843,
1732554390416,
1732554575473,
1732704102063,
1732542536599,
1732664700205
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_fPH1"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_fPH1"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_SAfQ"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_SAfQ"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_MDGs"
],
[
"ICLR.cc/2025/Conference/Submission8010/Area_Chair_1571"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_fPH1"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_MDGs"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Reviewer_SAfQ"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8010/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Official Comment by Authors (2/4)\", \"comment\": \"> **W3: Grounding lags behind recent work focused specifically on grounding; And significant gian is from changing the backbone to Qwen2-VL.**\", \"a\": \"Thank you for your valuable feedback regarding grounding performance. We\\u2019d like to address your points and clarify the core focus and contributions of our work.\\n\\nFirst, our primary objective is to develop a scalable approach for generating **multi-step, high-quality agent trajectories**, addressing the current scarcity of trajectory data. We acknowledge that very recent grounding-focused research has achieved impressive results, largely by leveraging vast datasets (on the scale of millions) specifically curated for grounding tasks. **This approach differs fundamentally from our work, which focuses on guided replaying as a scalable method to generate multi-step trajectories, ultimately improving agents' planning and reasoning capabilities.** We hope that our contributions can complement these grounding-specific efforts to further advance agent capabilities in real-world evaluations.\\n\\nConducting GUI grounding evaluation on ScreenSpot aims to demonstrate the advantages of AgentTrek's diverse format data in enhancing GUI grounding capabilities. To address the concern that grounding improvements might primarily stem from the backbone model rather than the data itself, we conducted additional analyses. While **Qwen2-VL** indeed possesses better inherent grounding capabilities (achieving a baseline average score of 30.7), we also evaluated **LLaVA-OneVision** as a baseline due to its more transparent training pipeline. 
The results are summarized below:\\n\\n| **Model** | **Text** | **Icon/Widget** | **Average** |\\n|----------------------------|----------|------------------|-------------|\\n| **LLaVA-OneVision** | 0 | 0 | 0 |\\n| **Qwen2-VL** | 35.2 | 25.7 | 30.7 |\\n| **LLaVA-OneVision w/ AgentTrek** | 58.7 | 23.8 | 42.2 |\\n| **Qwen2-VL w/ AgentTrek** | 81.7 | 51.5 | 67.4 |\", \"as_the_table_shows\": \"1. **LLaVA-OneVision**, which lacks extensive training on natural-image grounding, performs poorly on GUI grounding tasks because it can not follow instruction to generate coordinates. However, after training with **AgentTrek**, its performance improves substantially, demonstrating the value of our dataset in enhancing grounding abilities, even for models without strong inherent grounding capabilities.\\n2. For **Qwen2-VL**, the inclusion of AgentTrek data leads to a dramatic improvement, particularly in GUI grounding tasks, further validating the effectiveness of our dataset.\\n\\nIt is important to note that **AgentTrek trajectories contains approximately 70K effective grounding pairs**, far fewer than the 1\\u20132 million pairs typically used in grounding-focused works. Despite this disparity, the performance improvement are promising, highlighting that AgentTrek not only excels in generating multi-step trajectories but also contributes to improving visual grounding capabilities.\\n\\nWe greatly appreciate your thoughtful feedback, and we hope this clarification underscores the complementary nature of our work to grounding-focused research and the broader potential of our contributions to agent development. \\n\\n---\"}",
"{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"---\\n\\n> **W3: Broader generalization for more complex computer control tasks which lack corresponding web tutorials or have very limited resources, especially those requiring highly precise and complex control.**\", \"a\": \"Thank you for your thoughtful feedback. **Generalizing to more complex computer control tasks, particularly those lacking web tutorials or with limited resources, is indeed a crucial and challenging goal.** We have taken steps to evaluate the effectiveness of our approach in such scenarios.\\n\\nTo assess whether AgentTrek data can help models in **truly out-of-domain (OOD) environments**, we fine-tuned **Qwen2.5-7B-Instruct** on AgentTrek data and evaluated it on **WebArena**. WebArena features self-hosted, non-public websites, making it a robust benchmark for testing model performance in realistic environments without accessible resources or tutorials. The results are summarized below:\\n\\n| **Model** | **WebArena Score** |\\n|----------------------------------|--------------------|\\n| **LLaMa3-chat-8B**[1] | 3.32 |\\n| **Qwen2.5-7B-Instruct** | 3.57 |\\n| **LLaMa3-chat-70B**[1] | 7.02 |\\n| **GPT-4o** | 13.1 |\\n| **Synatra-CodeLlama-7B**[1] | 6.28 |\\n| **AutoWebGLM (OOD SFT)**[2] | 8.5 |\\n| **AutoWebGLM (In-domain RFT)**[2] | 18.2* |\\n| **Qwen2.5-7B-Instruct w/ AgentTrek** | **10.46** |\", \"we_can_find_that\": \"1. **Significant Improvement with AgentTrek**: Fine-tuning with AgentTrek data significantly boosted the performance of **Qwen2.5-7B-Instruct**, closing the gap with GPT-4o and outperforming other open-source models. This underscores the effectiveness of AgentTrek data in enabling models to tackle tasks in realistic, resource-limited environments.\\n\\n2. **Broader Generalization Challenges**: While the results demonstrate progress, we recognize that supporting more complex and long-range control tasks requires further advancements. 
These tasks often lack readily available tutorials or structured resources, posing unique challenges.\\n\\n**Future Directions:**\\n\\nTo address these challenges, we are actively exploring iterative data generation and self-training with stronger open-source models: Using filtered, high-quality replay data to train new replay models interactively, bootstrapping the data generation loop. This approach can progressively enable the creation of more complex and higher-quality datasets, equipping models to handle advanced computer control tasks and explore environments with minimal resources.\\n\\nWe believe this iterative **self-training framework** can push the boundaries of what is possible in complex, resource-constrained scenarios. Thank you again for your insightful suggestions!\\n\\n---\\n\\nWe sincerely appreciate your detailed feedback. We hope the above response can address all your concerns. If you have any questions, we are pleased to provide further clarification!\\n\\n[1] Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale, Ou et al., 2024\\n[2] AutoWebGLM: A Large Language Model-based Web Navigating Agent, Lai et al., 2024\"}",
"{\"comment\": \"And do the authors plan to open-source the dataset in the future?\"}",
"{\"comment\": \"Interesting statistics (considering Mind2Web also claims to use mainstream websites and domains). I will raise my rating. But overall, I would suggest make the work more comprehensive w.r.t the data analysis, and it doesn't need to emphasize improvements on grounding in my opinion. Good Luck.\"}",
"{\"title\": \"Thanks for the feedback!\", \"comment\": \"Bootstrap is an excellent approach to enhance open-source models for data construction. Additionally, I appreciate the results provided for OOD environments. My initial concerns were also related to tasks requiring precise control\\u2014specifically, coordinate-level control to execute actions effectively (it is very common and the tutorial may can not make effect for the fine-grained action). Given the time constraints for conducting additional experiments, do you have any thoughts or insights on addressing this aspect?\\n\\nI would like to modify my score higher if this issue can be addressed.\"}",
"{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"> **W3: Failure Case Analysis**\", \"a\": \"Thank you for your insightful focus on potential failure cases. During the replay process, we observed that the **number of steps in a tutorial** often correlates with task complexity, which affects replay success rates. To analyze this, we categorized tutorials based on their paraphrased step-by-step guidance into three complexity buckets: **Easy (0\\u20135 steps), Medium (6\\u20139 steps), and Hard (>10 steps)**. The results are summarized below:\\n\\n| **#Tutorial Steps** | **Success Rate** |\\n|----------------------|------------------|\\n| **Easy (0\\u20135)** | 53.0% |\\n| **Medium (6\\u20139)** | 48.9% |\\n| **Hard (\\u226510)** | 27.6% |\\n\\nThis analysis highlights the challenge of handling more complex tasks, as success rates decline for harder tutorials.\\n\\nIn addition to task complexity, we manually analyzed a sample of failed cases and identified another key factor: **tutorial expiration**. Specifically, some target websites had been updated or redesigned, rendering the tutorial instructions outdated and mismatched during replay. While we mitigated this issue by prioritizing **recent webpages** by timestamp in RedPajama during data collection, this challenge could become more prominent as the dataset scales. Appendix Figure 12 illustrates this issue.\\n\\nTo address tutorial expiration, we experimented with guiding the paraphrase model to update outdated tutorial information (e.g., adjusting booking dates to be current). This showed promising improvements but was not scaled further due to time constraints. We plan to expand and validate this approach in future work to optimize the replay process and improve robustness.\\n\\nWe appreciate your valuable suggestion and will continue exploring ways to address these failure cases in future iterations. Thank you!\\n\\n---\\n\\nWe sincerely appreciate your detailed feedback. 
We hope the above response can address all your concerns. If you have any questions, we are pleased to provide further clarification!\"}",
"{\"title\": \"Official Comment by Authors (3/4)\", \"comment\": \"---\\n\\n> **W4: Weak performance on the ScreenSpot icon split raises concerns about possible overfitting on Mind2Web.**\", \"a\": \"Thank you for highlighting concerns about potential overfitting. We\\u2019d like to address your feedback comprehensively from three perspectives:\\n\\n---\\n\\n1. **Minimal Overlap Between AgentTrek and Mind2Web Test Websites**\\n\\n Our statistical analysis in W1 confirms that there is **minimal overlap** between the websites in the AgentTrek dataset and the Mind2Web test set. This strongly suggests that the performance improvements on Mind2Web are not due to overfitting on specific websites or tasks. Instead, the improvements are driven by AgentTrek\\u2019s high-quality, multi-step web agent trajectories. This limited overlap reinforces that the gains are genuine and come from improved generalization rather than overfitting.\\n\\n2. **ScreenSpot Grounding Performance vs. Mind2Web Evaluation**\\n\\n We understand the concerns regarding weaker absolute performance on icon grounding. However, we believe this does not imply overfitting to Mind2Web for the following reasons:\\n\\n- Web Tasks Focus on Textual Grounding: Web trajectory tasks, including those in Mind2Web, are predominantly focused on textual element grounding rather than icon grounding. For example, actions like TYPE, and SELECT_OPTION naturally emphasize textual grounding. Our analysis of Mind2Web trajectories shows that 90.62% of CLICK actions involve textual grounding, underscoring its primary importance.\\n\\n- Strong Textual Grounding Results: In the ScreenSpot evaluation, Qwen2-VL fine-tuned with AgentTrek achieves strong textual grounding performance (81.7%), demonstrating its ability to handle the core tasks of Mind2Web effectively. 
This grounding capability, combined with improved planning and reasoning abilities, drives overall benchmark improvements and further supports the conclusion that the model is generalizing rather than overfitting.\\n\\n3. **Additional Online Evaluation on WebArena**\\n\\n To further address concerns about potential overfitting to Mind2Web, we conducted an **online evaluation** using WebArena, a self-hosted, interactive testing environment that ensures a completely **out-of-domain (OOD) evaluation**, as no publicly available tutorials or associated training data exist for these sites.\\n\\n we converted AgentTrek\\u2019s trajectories into pure textual format and fine-tuned the **Qwen2.5-7B-Instruct** model. The results are as follows:\\n\\n| **Model** | **WebArena Score** |\\n|----------------------------------|--------------------|\\n| **LLaMa3-chat-8B**[1] | 3.32 |\\n| **Qwen2.5-7B-Instruct** | 3.57 |\\n| **LLaMa3-chat-70B**[1] | 7.02 |\\n| **GPT-4o** | 13.1 |\\n| **Synatra-CodeLlama-7B**[1] | 6.28 |\\n| **AutoWebGLM (OOD SFT)**[2] | 8.5 |\\n| **AutoWebGLM (In-domain RFT)**[2] | 18.2* |\\n| **Qwen2.5-7B-Instruct w/ AgentTrek** | **10.46** |\", \"key_insights\": \"- **Significant Improvements with AgentTrek**: \\n The fine-tuned Qwen2.5-7B-Instruct model, trained with AgentTrek data, achieves a **substantial performance boost**, significantly outperforming its untrained counterpart.\\n- **Best Performance Among Open-Source Models**: \\n The fine-tuned model achieves the highest performance among open-source web agents and approaches the performance of GPT-4o, demonstrating the effectiveness of AgentTrek data in improving real-world web agent capabilities.\\n- **Generalization to New Domains**: \\n The strong performance in WebArena\\u2019s OOD setting further validates that AgentTrek enhances generalization, addressing concerns about overfitting.\\n\\n\\nIn conclusion, these aspects demonstrate that AgentTrek contributes significantly to web agent capabilities 
without overfitting to specific tasks or domains. We hope this additional evidence alleviates your concerns!\\n\\n\\n[1] Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale, Ou et al., 2024\\n\\n[2] AutoWebGLM: A Large Language Model-based Web Navigating Agent, Lai et al., 2024\"}",
"{\"summary\": \"The AgentTrek framework introduces a scalable pipeline for generating high-quality GUI agent trajectory data by utilizing web tutorials. It automates the collection of tutorial-like instructions from the internet, transforms them into structured tasks, and uses a visual-language model agent to simulate and execute these tasks in real digital environments. An evaluator model verifies the generated trajectories to ensure accuracy, reducing reliance on labor-intensive human annotations. Experimental results show that models trained with AgentTrek-generated data outperform those trained on existing datasets, especially in task planning and GUI grounding. This framework offers a cost-effective, automated solution for training GUI agents on a large scale.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The AgentTrek framework leverages online tutorials to automatically generate high-quality GUI agent trajectory data, automating the data generation process and reducing the need for manual data collection. Through an evaluator model, it verifies the generated trajectories, with a multi-layered generation and evaluation mechanism to ensure data quality and effectiveness.\\n\\nAgents trained on AgentTrek-generated data perform exceptionally well in several benchmark tests (such as ScreenSpot and Multimodal-Mind2Web), especially in task planning and GUI element recognition, significantly outperforming traditional datasets.\", \"weaknesses\": \"Certain technical details, such as automatic labeling and tutorial filtering, are only briefly mentioned in the paper, lacking more comprehensive explanations.\\n\\nThe paper notes that the success rate of generating effective trajectories is only 39.1%, based on GPT-4o Mini. Although GPT-4o Mini is relatively cost-effective, achieving larger-scale data generation with the current success rate remains challenging. 
There should be some indications if experiments are conducted with alternative open-source models to assess the feasibility and effectiveness of data construction within this framework.\\n\\nAnother consideration is the broader generalization of the framework for more complex computer control tasks. Many tasks may lack corresponding web tutorials or have very limited resources, especially those requiring highly precise and complex control, which will also lead to the data/categories bais issue. Do you have any thoughts or attemptions on these situations?\", \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Updated Manuscript and Response to All Reviewers\", \"comment\": [\"We sincerely thank the reviewers for their thoughtful and constructive feedback, which has been invaluable in improving our work. We are particularly encouraged by the recognition of AgentTrek's ability to scalably synthesize high-quality agent trajectories without requiring human annotators. In particular, we appreciate the specific acknowledgments from reviewers:\", \"**`MDGs`**, **`fPH1`**, and **`SAfQ`** for highlighting the scalability and cost-effectiveness of our data synthesis pipeline and its significant impact on improving agent performance.\", \"**`MDGs`** for emphasizing the extensive diversity of our trajectories across multiple domains and task types.\", \"We are delighted to share that we have further scaled and verified over **10,000 agent trajectories**, which we believe constitutes the **largest web trajectory dataset** available. This growing dataset includes multi-modal resources such as screenshots, HTML/AXTREE structures, videos, and intermediate reasoning processes, all of which we plan to release publicly. 
We are confident that these contributions will offer valuable resources to advance GUI agent research.\", \"Based on the valuable feedback, we have addressed all concerns in our manuscript and added comprehensive details and explanations in our Appendix (updates are shown in purple for clarity):\", \"**Scaling Study:** Expanded AgentTrek to a **10K trajectory dataset** and conducted new studies demonstrating the effectiveness of scaling on agent performance.\", \"**Out-of-Domain Generalization:** Conducted an analysis of AgentTrek trajectory overlap with the Mind2Web evaluation dataset to confirm its performance in out-of-domain scenarios.\", \"**More Benchmark Validation:** Verified AgentTrek's effectiveness on **MiniWoB++** and a self-hosted realistic ** WebArena**, showcasing the generalizability of our synthesized data in textual modalities.\", \"**Pipeline Details:** Provided additional technical details about the AgentTrek pipeline, including expanded explanations of (pre)-filtering and evaluation processes.\", \"We believe these updates further underscore the potential of AgentTrek as a scalable trajectory synthesis pipeline for advancing GUI agent research. We hope the revised submission meets your expectations and demonstrates the value of our contributions.\", \"Thank you for your constructive feedback and support!\"]}",
"{\"comment\": \"Thanks for your response. I think my questions have been answered and I am willing to increase my rating.\"}",
"{\"metareview\": \"The reviewers are overall positive with the work. The authors contributed AgentTrek, a system for synthesizing web agent trajectory data from online tutorials. The authors propose a pipeline to collect tutorials, convert them into structured tasks, and use VLMs to simulate and evaluate these tasks. Results show that agents trained with AgentTrek data perform better in task grounding and planning. That said the reviewers raised several concerns. Reviewer fPH1 is concerned about potential overlap between AgentTrek data and the Mind2Web benchmark, which could undermine the out-of-domain evaluation. The reviewers questioned the grounding performance and wanted more evidence that the performance gain. In addition, the reviewers raised the concerns such as overfitting and generalization to complex tasks, and asked a number of questions regarding technical details. Overall, the reviewers and authors engage in a constructive discussion. The reviewers raise valid concerns, and the authors adequately address them. The paper presents a promising approach to synthesizing web agent trajectory data, and the proposed pipeline has the potential to be a valuable tool for training web agents.\\n\\nAdditional references\\n\\nMining tasks from web tutorials reminded me of the work by Li et al. ACL 2020 \\\"Mapping Natural Language Instructions to Mobile UI Action Sequences\\\"\\nand the most recent work on learning by reflection by Wang et al. \\\"Devil's Advocate: Anticipatory Reflection for LLM Agents\\\", EMNLP 2024. These works should be discussed in the revision.\", \"additional_comments_on_reviewer_discussion\": \"See the above.\"}",
"{\"summary\": \"The paper introduces AgentTrek, a data synthesis pipeline that generates web agent trajectories from online tutorials. It automatically collects tutorials, converts them into task sequences, and uses VLMs to simulate and evaluate these tasks. Results show that agents trained with AgentTrek data perform better in task grounding and planning than those using traditional datasets, providing a cost-effective solution for large-scale web agent training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"A clear pipeline for generating complex agent trajectories from web tutorials.\", \"weaknesses\": \"1. In the Mind2Web experiment, it is clear that there is overlap between the training data and those in Mind2Web-test, with likely overlap in tasks, websites as well as domains. This overlap should be clarified, as it undermines the intended out-of-domain evaluation of Mind2Web.\\n2. The total number of trajectories remains limited, staying within the same scale as Mind2Web (in thousands). And the effectiveness is not good enough compared to the training data of Mind2Web (when the data size is 4x).\\n3. The comparison on grounding is not very meaningful, as performance significantly lags behind recent work focused specifically on grounding. (And empirically, a significant gain is from changing the backbone to Qwen 2 VL, compared to SeeClick and CogAgent.)\\n4. The web results on ScreenSpot, especially for icons, are not strong, which raises further questions about possible overfitting to specific websites or tasks in Mind2Web.\\n5. Some writing issues: \\n - baseline results should clearly indicate their sources\\n - duplicate entries in the reference\\n - wrong reference\\n - more details of the evaluation on Mind2Web should be provided. (This is not minor. As there are huge differences with respect to the settings in table 6)\\n - the synthesized data is essentially web agent trajectories. 
No need to always overclaim it to GUI agent trajectory. It only confuses people.\", \"questions\": \"1. The overlap issue to Mind2Web.\\n2. Is there any other way to support the effectiveness of the synthesized data?\\n3. If possible, show the effectiveness of the synthetic data when it is further scaled up.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you so much for raising our rating!\", \"comment\": \"Thank you for raising our rating! We sincerely appreciate your time and attention in providing valuable feedback and discussing insightful questions with us. It truly motivates us to keep improving and refining our approach!\"}",
"{\"summary\": \"1. The paper introduces AgentTrek, a scalable data synthesis pipeline that generates high-quality GUI agent trajectories\\nby leveraging web tutorials.\\n2. The method collects web tutorials from the internet, transforms them into structured task goals with step-by-step instructions, and uses a visual-language model (VLM) agent to simulate their execution in a digital environment. A VLM-based evaluator ensures the correctness of the generated trajectories.\\n3. The paper provides experimental results and analysis showing that agents trained with the synthesized data outperform those trained on existing datasets in both grounding and planning capabilities.\\n4. The authors emphasize that the method is more cost-efficient than traditional human annotation methods, making it a practical solution for large-scale GUI agent training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. It is a novel pipeline that leverages web tutorials to synthesize high-quality GUI agent trajectory data at scale. This is a valuable contribution to the field, addressing the scarcity of reliable and scalable trajectory data. The proposed pipeline significantly reduces the cost of data collection compared to traditional human annotation.\\n2. The paper is well-structured and clearly explains the steps involved in the data filtering.\\n3. The dataset is comprehensive, containing a wide range of task types, platforms, and web environments.\", \"weaknesses\": \"1. The paper does not provide a baseline comparing trajectory data with just textual data. It would be beneficial to see how much the different elements (DOM/HTML structures, AXTree data, intermediate reasoning steps, full video recordings, and corresponding screenshots for each action) contribute to the dataset effectiveness.\\n2. The paper deals with only web-based tutorials and shows evaluation on only 2 benchmarks. 
It would be beneficial to expand the evaluation by including additional benchmarks such as MiniWob.\\n3. The paper lacks an analysis of potential failure cases. For example, is the trajectory data still as effective when the number of steps increase.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}",
"{\"comment\": \"Thank you very much for your positive feedback and improved rating! Your encouragement means a lot to us and motivates us to further improve our work. We are very pleased to elaborate on our improvement plans based on your suggestions.\\n\\n> **Interesting statistics (considering Mind2Web also claims to use mainstream websites and domains). I will raise my rating. But overall, I would suggest make the work more comprehensive w.r.t the data analysis, and it doesn't need to emphasize improvements on grounding in my opinion. Good Luck.**\", \"a\": \"Absolutely! We are fully committed to open-sourcing the dataset. As part of this effort, we have scaled the AgentTrek dataset to 10k trajectories in its first version. We plan to release these trajectories, including both reasoning steps and actions, in a multi-modal format encompassing HTML, Accessibility Trees, and Videos.\\nFurthermore, as discussed with Reviewer `SAfQ`, we are exploring an iterative self-training approach to further expand the dataset with fine-tuned open-source models to reduce reliance on closed-source models. This involves fine-tuning open-source models, such as Qwen2.5-72B-Instruct, using the current AgentTrek data to develop powerful replay agents that bootstrap the data generation loop. This approach offers two key benefits:\\n1. **Reduced Dependence on Closed-Source Models**: By iteratively training open-source models, we aim to gradually replace closed-source models while maintaining high performance.\\n2. **Scalable Data Creation**: This process facilitates the generation of more complex and higher-quality datasets over time, advancing scalable agent learning methodologies.\\n\\nWe have partially demonstrated the potential of this approach in our updated WebArena results, where a fine-tuned Qwen2.5-7B-Instruct model was able to independently complete web browsing tasks in a realistic online environment, achieving performance close to that of GPT-4o. 
With larger models and tutorial guidance, better results are anticipated.\\n\\nWe are excited about the prospect of not only releasing a larger-scale web agent trajectory dataset but also open-sourcing more capable models trained on this data. This effort will significantly advance AgentTrek, contributing to scalable agent learning research and benefiting the broader community.\\n\\nOnce again, thank you so much for your attention to our open-source plan!\"}",
"{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"We sincerely thank you for your recognition of our work! We deeply appreciate your acknowledgment of our data generation process. Furthermore, we are grateful for your recognition of the cost-effectiveness of AgentTrek\\u2019s data collection approach. As you noted, the high cost of human annotation is indeed a significant bottleneck, and our method effectively reduces this barrier. Lastly, we are thankful for your recognition of the effectiveness of our collected data. Our data has demonstrated improvements in GUI agents' planning and grounding capabilities across various datasets, outperforming traditional datasets. The approach introduced by AgentTrek will continue to contribute to scalable training for GUI agents and address cost challenges in GUI agent data collection.\\n\\nWe also noticed you have some constructive questions about our work, and we're happy to elaborate further below!\\n\\n---\\n\\n> **W1: More comprehensive explanation for automatic labeling and tutorial filtering.**\", \"a\": \"Thank you for your insightful suggestion about exploring open-source models to reduce costs. In this paper, we implemented several strategies to improve the success rate and lower the cost of generating effective samples, including:\\n\\n1. **Filtering up-to-date, high-quality webpages** during the initial data collection phase. \\n2. **Paraphrasing webpages** into step-by-step tutorials to make them easier for the model to follow. \\n3. **Optimizing prompts** to enhance the evaluator\\u2019s recall of positive samples. \\n\\nThese efforts significantly improved cost efficiency, and we appreciate your recognition of this aspect.\\n\\nWe agree that adopting open-source models could further reduce costs and enhance controllability. However, our experiments reveal that current open-source models still lag significantly behind closed-source alternatives in success rates. 
For example, we conducted replay experiments with **Qwen2.5-72B-Instruct**, and the results are as follows:\\n\\n| **Model** | **Effective Rate** |\\n|----------------------------------|------------------|\\n| **Qwen2.5-72B-Instruct** | 15.68% |\\n| **GPT-4o** | 47.74% |\\n\\nThe success rate for open-source models remains substantially lower, leading to a higher cost per positive sample. Consequently, we opted for closed-source models in this study to ensure data quality and scalability.\\n\\nThat said, open-source models hold promise for future exploration. Our **WebArena results** demonstrate that fine-tuning open-source models with AgentTrek data generated by GPT-4o significantly enhances their performance:\\n\\n| **Model** | **WebArena Score** |\\n|----------------------------------|--------------------|\\n| **GPT-4o** | 13.1 |\\n| **Qwen2.5-7B-Instruct** | 3.57 |\\n| **Qwen2.5-7B-Instruct w/ AgentTrek** | **10.46** |\\n\\nIn future work, we plan to explore fine-tuning **Qwen2.5-72B-Instruct** iteratively to bootstrap the data generation loop. This iterative approach could gradually reduce our reliance on closed-source models while maintaining high data quality. Thank you again for your valuable feedback!\"}",
"{\"title\": \"Official Comment by Authors (1/4)\", \"comment\": \"Thank you for taking the time to review our work and provide detailed feedback! We are grateful that you acknowledge how our work provides a scalable and cost-effective solution for web agent training compared to existing human annotation methods.\\n\\nWe also noticed you have some constructive questions about our work, and we're happy to elaborate further below!\\n\\n---\\n\\n\\n> **W1: Potential Overlap with Mind2Web Test Data**\", \"a\": [\"Thank you for highlighting concerns about the data scale! We fully agree that **AgentTrek is a scalable approach** with great potential for further dataset expansion. We are actively working on generating more data and are excited to share that we have already verified and collected **10K agent trajectories**, which, to the best of our knowledge, represents the **largest web trajectory dataset** currently available. We are committed to releasing these trajectories in a multi-modal format, including **screenshots, HTML/AXTree, videos**, and **intermediate reasoning processes**. We hope these contributions will provide valuable resources for advancing agent research.\", \"Effectiveness is indeed a critical factor, as you pointed out. Our experiments demonstrate that our data **delivers significant performance improvements on Mind2Web**. While the synthetic data we introduced is larger in scale than Mind2Web, it is important to note that, as shown in the W1 overlap analysis, our dataset is predominantly **out-of-domain** relative to Mind2Web. This inherently gives the **in-domain Mind2Web-train data** an advantage in terms of data efficiency. However, even with this difference, **AgentTrek data consistently shows significant performance gains** across all splits, whether used for standalone training or in combination with Mind2Web-train. We believe these results strongly validate the **effectiveness and value** of the AgentTrek dataset.\", \"---\"]}",
"{\"title\": \"Official Comment by Authors (4/4)\", \"comment\": \"---\\n\\n> **W5: Writing and Presentation Issues**\", \"a\": \"Thank you for highlighting the importance of scalability! We fully agree that **scalability is a critical strength of AgentTrek**, offering great potential for further dataset expansion. As mentioned in W2, we are actively generating more data and are excited to share that we have already verified and collected **10K agent trajectories**. This milestone allowed us to systematically explore the effects of scaling up the dataset.\\n\\nTo evaluate this, we trained the model using varying proportions of the dataset (20% to 100%) and assessed its performance on **Multimodal-Mind2Web** across three splits. The results are summarized below:\\n\\n| **Data Amount** | **Cross-Task SR** | **Cross-Website SR** | **Cross-Domain SR** |\\n|------------------|-------------------|-----------------------|---------------------|\\n| **20%** | 36.1% | 35.5% | 39.5% |\\n| **40%** | 41.0% | 35.8% | 42.5% |\\n| **60%** | 41.6% | 37.2% | 42.8% |\\n| **80%** | 42.6% | 38.0% | 44.3% |\\n| **100%** | 42.6% | 37.5% | 45.0% |\\n\\n\\nPerformance on MM-Mind2Web improves steadily as more AgentTrek data is used, with the best results achieved when using the full dataset. This underscores the value of scaling up AgentTrek in enhancing model effectiveness.\\n\\nThese results highlight the importance of dataset scaling in improving web agent performance. As we continue to expand AgentTrek, we are excited about further unlocking its potential and exploring new applications. Thank you for your suggestion, and we appreciate your thoughtful feedback!\\n\\n----\\n\\nWe sincerely appreciate your detailed feedback. We hope the above response can address all your concerns. If you have any questions, we are pleased to provide further clarification!\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"The current rebuttal mostly addressed my concerns and I have increased my rating.\"}",
"{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"Thank you for recognizing our work! We are pleased that you highlighted the cost-efficiency and scalability of our approach in AgentTrek compared to existing human annotation. We are also glad you noted the diversity and effectiveness of our collected data, encompassing various task types and web environments. While existing GUI agent datasets are often limited to specific subdomains, our method enables GUI agent data to cover a broader range of domains, making the training of more powerful GUI agents possible.\\n\\nWe also noticed you have some constructive questions about our work, and we're happy to elaborate further below!\\n\\n---\\n\\n> **W1: Contribution of Various Components (ScreenShots/DOM/HTML/Video, etc.) to Dataset Effectiveness**\", \"a\": \"Thank you for your valuable suggestion regarding evaluation benchmarks. We understand the importance of broadening evaluations, and we hope the WebArena experiment presented in W1 addresses your concerns. WebArena is a challenging, online, out-of-domain (OOD) benchmark featuring self-hosted websites that are entirely unseen during training. 
Our results demonstrate that AgentTrek data not only improves web agent performance across multiple modalities but also generalizes effectively to WebArena OOD scenarios, highlighting its real-world applicability.\\n\\nWe also evaluated our model on MiniWoB++, as you suggested, which further demonstrates the effectiveness of Qwen2.5-7B-Instruct w/ AgentTrek.\\n\\n| **Model** | **MiniWoB++ Score** |\\n|----------------------------------|--------------------|\\n| **CodeLlama-7B-Instruct**[1] | 23.04 |\\n| **LLaMA3-chat-8B**[1] | 31.74 |\\n| **Qwen2.5-7B-Instruct** | 30.19 |\\n| **LLaMA3-chat-70B**[1] | 48.70 |\\n| **GPT-4**[1] | 53.04 |\\n| **Synatra-CodeLlama-7B**[1] | 38.20 |\\n| **Qwen2.5-7B-Instruct w/ AgentTrek** | **45.28** |\\n\\n[1] Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale, Ou et al., 2024\", \"we_found_that\": \"1. **Textual Effectiveness**: AgentTrek\\u2019s textual trajectories significantly boost performance, surpassing open-source baselines and nearing GPT-4o.\\n2. **OOD Generalization**: Strong results on WebArena confirm that AgentTrek\\u2019s data generalizes well to unseen domains.\\n\\nWhile this shows the value of textual data, we recognize the importance of further quantifying contributions from other modalities (e.g. videos) and plan to explore this in future work. Thank you for your suggestion!\\n\\n\\n\\n> **W2: Expand Evaluation Benchmarks**\"}",
"{\"comment\": \"Thank you! We sincerely appreciate your thoughtful feedback and recognition of our work, particularly your kind acknowledgment of our use of bootstrapping to enhance open-source models and the provision of results for OOD environments. Your encouragement is highly motivating and reinforces our commitment to advancing this line of research.\\n\\n---\\n\\n> **My initial concerns were also related to tasks requiring precise control\\u2014specifically, coordinate-level control to execute actions effectively (it is very common and the tutorial may can not make effect for the fine-grained action). Given the time constraints for conducting additional experiments, do you have any thoughts or insights on addressing this aspect?**\", \"a\": \"Thank you for your insightful question! We are delighted to discuss this further to address your concerns.\\n\\n- **Textual-Based Action Space and Its Limitations on Coordinate-level Control:** Our guided replaying process is based on a textual web agent. During replay, the web agent observes the current textual accessibility tree, where each interactable element is labeled with a unique element ID. The agent predicts the next action by generating an action that selects the appropriate element ID, guided by the step-by-step tutorial. This framework primarily operates within a textual action space and does not involve pixel-level fine-grained control.\\nWhile this approach is sufficient for most daily web navigation tasks, it is indeed limited for vision-dependent, pixel-level actions, such as drawing or photo editing, which require generating precise coordinates. 
These more complex tasks present significant challenges to the AgentTrek framework and highlight an area for further exploration.\\n\\n- **Vision-Based Trajectories and Coordinate-Level Control:** Despite these limitations, AgentTrek\\u2019s comprehensive multi-modal recording format captures all relevant multimodal information during the textual guided replay process, including:\\n 1. Screenshots of the interface.\\n 2. Bounding boxes of the target elements and corresponding actions.\\n\\n These recordings result in vision-based trajectories consisting of screenshot observations paired with coordinate-level action annotations. Such trajectories enable models to learn and execute coordinate-level actions effectively.\\n \\n In our Multimodal-Mind2Web (Table 6) results, we successfully demonstrated the utility of these vision-based trajectories by building a VLM-based GUI agent. This vision-based agent can handle tasks by taking precise, coordinate-level control, showing that AgentTrek\\u2019s text-based replay trajectories, when augmented with multimodal recordings, facilitate the transfer of textual agent capabilities to vision-based agents for fine-grained coordinate-level control.\\n\\n- **Scalability and Future Directions:** This approach can also integrate with the bootstrapping training framework we previously discussed. By using bootstrapping to produce more of AgentTrek\\u2019s multimodal data, we can transfer textual agent capabilities to vision-based agents at scale, progressively improving their ability to perform coordinate-level tasks. This iterative approach has the potential to enhance the quality of trajectories and expand the scope of tasks that AgentTrek can address.\\n\\nWe hope this addresses your question and provides clarity on our current capabilities and future potential. Thank you once again for your valuable feedback and for pushing us to refine our approach further!\"}"
]
} |
EEbRrNsiiD | MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases | [
"Rithesh R N",
"Liangwei Yang",
"Juntao Tan",
"Tulika Manoj Awalgaonkar",
"Yilun Zhou",
"Shelby Heinecke",
"Sachin Desai",
"Chien-Sheng Wu",
"Ran Xu",
"Sarah Tan",
"Jianguo Zhang",
"Zhiwei Liu",
"Shirley Kokane",
"Zuxin Liu",
"Ming Zhu",
"Huan Wang",
"Caiming Xiong",
"Silvio Savarese"
] | The deployment of Large Language Models (LLMs) and Large Multimodal Models (LMMs) on mobile devices has gained significant attention due to the benefits of enhanced privacy, stability, and personalization. However, the hardware constraints of mobile devices necessitate the use of models with fewer parameters and model compression techniques like quantization. Currently, there is limited understanding of quantization's impact on various task performances, including LLM tasks, LMM tasks, and, critically, trust and safety. There is a lack of adequate tools for systematically testing these models on mobile devices. To address these gaps, we introduce MobileAIBench, a comprehensive benchmarking framework for evaluating mobile-optimized LLMs and LMMs. MobileAIBench assesses models across different sizes, quantization levels, and tasks, measuring latency and resource consumption on real devices. Our two-part open-source framework includes a library for running evaluations on desktops and a mobile app for on-device latency and hardware utilization measurements. Our thorough analysis aims to accelerate mobile AI research and deployment by providing insights into the performance and feasibility of deploying LLMs and LMMs on mobile platforms. | [
"Large Language Model",
"Mobile",
"Benchmarking"
] | Reject | https://openreview.net/pdf?id=EEbRrNsiiD | https://openreview.net/forum?id=EEbRrNsiiD | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"upvGJLFvo6",
"udBbluTei5",
"mv7WoA7212",
"litbOfFPAY",
"eMTuhEVwOM",
"d7tts7r00v",
"aRNJAQ5WKM",
"a03Sj2zWUI",
"YI86nILw2c",
"WV24118m5b",
"UMGfQWfQKu",
"TTJ7Q0MGPY",
"JklpykTQQY",
"Ip5hYKXIXH",
"GDARTALC8t",
"Ej0qqfty6B",
"AnphvXkZ8j",
"5fZUb1EycS",
"4PttRA2e8l"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732585235789,
1732585183637,
1734487080771,
1733198304416,
1730641526471,
1730685708661,
1732874164830,
1732615826806,
1732595095134,
1737523894498,
1732590362106,
1732588708606,
1732587340396,
1730909442685,
1732595080975,
1732585399819,
1732589846250,
1730741120047,
1732589213608
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Area_Chair_oVfE"
],
[
"ICLR.cc/2025/Conference/Submission8210/Reviewer_NJfu"
],
[
"ICLR.cc/2025/Conference/Submission8210/Reviewer_NJfu"
],
[
"ICLR.cc/2025/Conference/Submission8210/Reviewer_1eAF"
],
[
"ICLR.cc/2025/Conference/Submission8210/Reviewer_1eAF"
],
[
"ICLR.cc/2025/Conference/Submission8210/Reviewer_QrG4"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Reviewer_QrG4"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8210/Reviewer_wj2K"
],
[
"ICLR.cc/2025/Conference/Submission8210/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"## Weaknesses (Continued):\\n\\n> \\\"Extensive experiments with MobileAIBench reveal several interesting findings. Quantization is an effective way to decrease model size while keeping the performance.\\\". But, isn't that expected? Why it is presented as the first \\\"interesting finding\\\"?\\n\\nThe primary intention was not to present this as a surprising result but rather to set the stage for the detailed findings that follow. Specifically, we aim to highlight how model performance varies across different levels of quantization and emphasize the varied sensitivities of different models and tasks to these levels. The statement \\\"Quantization is an effective way to decrease model size while keeping the performance\\\" provides foundational context, allowing readers to better appreciate the nuanced observations and insights discussed in the paragraph.\\n\\n> \\\"we found 7B to be the upper limit of what a high-end phone\\u2019s hardware can manage (even after quantization)\\\" 7B is not the generic upper limit, it is only the limit for the particular device. Also, I am not sure iPhone 14 can be considered as a high-end phone, without mentioning the Pro model that has more RAM memory and can possibly fit larger models.\\n\\nThe 7B upper limit applies to the specific configuration of the iPhone 14 (non-Pro). We will clarify this in the manuscript and add a note about the potential of higher-capacity devices (e.g., iPhone 14 Pro) for handling larger models.\\n\\n> Table 1 and 2: What is bold and what is underlined numbers?\\n\\nAs stated in Section 4.1.1, the highest score for each quantization category is indicated in bold, while the second-best score is underlined.\"}",
"{\"comment\": \"We appreciate the detailed feedback and the time taken to review our submission. Below, we address the primary concerns and suggestions.\\n\\n## Weaknesses:\\n\\n> - Missing important related works:\\n> - MELTing point: Mobile Evaluation of Language Transformers, from Laskaridis et al.\\n> - Small Language Models: Survey, Measurements, and Insights, from Zhenyan Lu et al.\\n>\\n> Authors are claiming that this is the first work \\\"to provide a thorough benchmarking and analysis of open source LLMs\\\". I suggest they reduce such strong claims.\\n\\nWe appreciate the reviewer pointing out important related works we missed. We will add citations to the above works. We will rephrase the claim about being \\\"the first work\\\" to avoid any overstatement, as the focus of our contribution lies in the comprehensiveness and usability of MobileAIBench for on-device LLM/LMM evaluation.\\n\\n> I would expect from a paper like this to list (if not include in the evaluation) the available on-device frameworks instead of sticking to llamacpp. For instance: MLC LLM, MediaPipe from Google, PyTorch ExecuTorch, Apple MLX should be mentioned.\\n\\nWe understand the importance of discussing frameworks such as MLC LLM, MediaPipe, PyTorch ExecuTorch, and Apple MLX. Our benchmarking pipeline is built on llama.cpp due to its widespread adoption and support for quantized models across various architectures. However, we will mention these alternatives in the discussion section and clarify that our framework could be extended to integrate additional engines in future iterations.\\n\\n> Details about the followed methodology are missing. How did the authors run the evaluation tasks on-device? Did they automate the process? Did they repeat the process multiple times? Did they reboot the device per task? 
Did they close the app and wait until the phone (that can easily get very hot and CPU get throttled) cools down?\\n\\nThe process of running benchmarks on-device was conducted manually and not automated. Each experiment, defined as running a single task on a specific model, was performed independently. After completing one experiment, we ensured a cooling-off period of at least 10 minutes before initiating the next experiment. This approach was implemented to mitigate potential thermal effects and ensure consistency in the results.\\n\\n> Considering that Std. is missing from the results reported in Table 1 and 2, I assume the authors did not repeat the experiment multiple times, and only reported performance for one run. This is very limited, performance can vary based on device state.\\n\\nThe experiments were conducted with a fixed random seed and a sampling temperature set to 0 to promote deterministic outputs. While we acknowledge the potential for some non-deterministic behavior due to factors like floating-point arithmetic, multi-threading, and model variability, this approach was chosen as a practical compromise between computational feasibility and reproducibility. We recognize that running each experiment multiple times across all datasets and models could provide a more comprehensive view of performance variability, but given the significant resource requirements, we opted for this methodology to balance practical constraints with meaningful insights.\\n\\n> Performance evaluations were only executed on CPU. It is possible to enable GPU support on llamacpp (through Metal in iOS) and performance should increase. Measuring Android would also be good (though not necessary), considering that you have an app ready.\\n\\nFor the benchmarking, we utilized GPU layers on the iPhone to accelerate the tasks.\"}",
"{\"metareview\": \"**summary**\\n\\nThe paper introduces MobileAIBench, a comprehensive benchmarking framework for evaluating the performance, efficiency, and deployability of LLMs and LMMs on mobile devices. It features tools for desktop and iOS platforms to test quantized models across various standard tasks, including NLP, multimodal, and trust and safety benchmarks. By analyzing the effects of quantization on model performance and resource usage, the study highlights key challenges and trade-offs in deploying LLMs and LMMs under constrained mobile hardware. The findings offer actionable insights for optimizing mobile AI applications and advancing quantization techniques. \\n\\n---\\n\\n**strengths**\\n\\n* Real-world experiments: By testing on real devices (e.g., iPhone 14), the paper measures key metrics like latency, hardware resource usage, CPU utilization, and battery consumption, providing realistic and practical insights.\\n* Diverse evaluation metrics: the paper employs extensive evaluation metrics beyond standard performance, including trust and safety, qualitative assessments, and diverse NLP/LMM tasks\\n* Clear and high-quality writing\\n\\n---\\n\\n**weaknesses**\\n\\n* Evaluation scope: Experiments were conducted only on the iPhone 14, excluding evaluations on newer and more diverse devices. Testing only on a single device misses opportunities to explore the performance of models on more diverse hardware, reducing the breadth of insights for on-device AI applications. 
Also, the lack of testing on hardware optimized for on-device AI (e.g., Snapdragon 8 Gen 3) limits the study's comprehensiveness and generalizability across varied hardware conditions.\\n* Focus on CPU-only evaluation: The evaluation is restricted to CPU performance, excluding GPU or other mobile AI accelerators, which are critical components in many modern devices optimized for AI tasks.\\n\\n---\\n\\n**decision**\\n\\nAll reviewers think that the overall direction of this paper is promising. However, they also raised concerns about the details and limited evaluation (cpu-only, single device, and so on; see [weaknesses]). During the discussion phase, concerns were not fully addressed. As a result, rejection is recommended.\", \"additional_comments_on_reviewer_discussion\": \"The authors responded to the reviewers' concerns, but these responses were not reflected in the draft, and most of the concerns raised by the reviewers remain unresolved. Therefore, I believe the revisions are insufficient to change the negative opinion.\"}",
"{\"comment\": \"I thank the authors for providing detailed responses to the concerns and questions. After reading the other reviewers' comments as well, I lean toward a more positive rating.\"}",
"{\"summary\": \"This paper proposes MobileAIBench, a platform for evaluating the performance of large language models (LLMs) and multimodal models (LMMs) on mobile devices. MobileAIBench focuses on the task performance, resource consumption, and trust and security of quantized models on mobile devices. The authors have built testing tools for desktop and mobile platforms and explored the impacts of different quantization levels on task effectiveness and resource utilization through experiments. The platform has reference value for mobile AI application scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Comprehensive mobile AI testing platform**: MobileAIBench integrates existing benchmark testing frameworks for tasks and models. It is suitable for performance evaluation on end devices and fills the gap in mobile large model evaluation.\\n\\n**Multi-dimensional performance evaluation**: The platform not only tests the performance of models on standard NLP and multimodal tasks, but also covers trust and security dimensions, highlighting privacy protection, bias, and ethical issues in mobile deployment.\\n\\n**Real-device testing**: MobileAIBench tests LLMs directly on mobile devices with key metrics such as latency, memory usage, CPU utilization, and battery consumption, which makes the results closer to actual application scenarios.\", \"weaknesses\": \"**Insufficient mobile-side experiments**: The focus of MobileAIBench should be on deploying LLMs and LMMs on mobile devices and examining the effects of quantization on task performance in these environments. However, most of the experiment results are from desktops rather than mobile devices. 
Supplementing with more mobile-side experiments, such as assessing the impacts of quantization strategies on mobile devices, would strengthen the work.\\n\\n**Lack of data-level innovation**: MobileAIBench seems to be a collection of existing tasks and datasets without introducing specific data or design for benchmarking LLMs in mobile scenarios. As a benchmark, it lacks specialized datasets or test case designs tailored to mobile-specific scenarios, which would better demonstrate the platform\\u2019s value. Thus, it may be more suitable to position MobileAIBench as a platform or testing tool rather than a standalone benchmark.\\n\\n**Claims of device consistency require support**: In line 235, the authors assert that test results on desktop devices are consistent with those on mobile devices. However, the paper does not provide sufficient experimental data to support this claim.\\n\\n**Room for improvement in experimental design**: Although the potential impact of device temperature is mentioned (line 462), temperature should be included as a sub-metric in the evaluation metrics to better reflect real-world conditions on mobile devices. I still recommend testing on more mobile devices to obtain more persuasive results.\", \"questions\": \"Can the authors provide more discussion and justification for the points above?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"A new benchmark is introduced to evaluate the behavior of LLMs and LMMs across various quantization levels, simulating deployment scenarios on mobile devices. By utilizing standard NLP, VQA, and safety tasks, the authors provide benchmarking references that offer insights into how model performance varies with quantization. The study highlights both the effectiveness and limitations of current quantization methods for LLMs. These limitations include, for example, the need to address efficiency challenges in LMMs. The experimental findings underscore differences in performance across quantization levels, providing valuable information for developing more efficient algorithms.\\n\\nThe key contributions of this work are a novel benchmark regarding the quantization of LLMs/LMMs, an open-source platform running on real devices, and in-depth analyses of current quantization methods and models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"S1 - This work makes a valuable contribution by expanding the community's understanding of models\\u2019 behaviors with quantization. This offers several analyses that will be beneficial for further research in this area, especially when deploying models on mobile devices.\\n\\nS2 - The open-sourced experimental platform is highly meaningful, allowing others to reproduce the work easily. \\n\\nS3 - For analyses, the authors have taken a comprehensive approach by considering various evaluation axes. For example, the studies on multimodal and safety tasks enhance the study\\u2019s relevance and depth.\", \"weaknesses\": \"W1 - The current set of tasks can be limited. With the growing interest in UI-based control for digital devices (such as Claude-3.5 for computer use), it would be beneficial to include related tasks. 
Have the authors considered incorporating AndroidWorld (Rawles et al., 2024) for general capability assessment or MobileSafetyBench (Lee et al., 2024) for evaluating the safety of agents controlling mobile devices?\\n\\nW2 - Relying solely on VQA for multimodal tasks may restrict the scope of analysis. Including other tasks, such as image captioning or OCR, could provide a more comprehensive evaluation of capabilities, especially considering their usage on mobile devices.\\n\\nW3 - Although the authors\\u2019 choice of the iPhone-14 as a representative device is understandable, it would enhance the robustness of the study to consider other device types. For example, assessment with Android OS devices or tablets would provide a broader understanding.\\n\\nW4 - (Minor) Certain aspects of the presentation could be improved. For example, the explanation of Figure 5 could be more detailed, and Figure 7 appears to be oddly rendered.\", \"questions\": \"Q1 - Could the authors clarify how this benchmark compares with existing ones regarding LLM quantization, such as LLM-QBench (Gong et al., 2024)? This comparison would help readers understand the unique contributions and positioning of this benchmark.\\n\\nQ2 - What was the rationale behind selecting a random sample of 1,000 from the dataset? Justification of this choice, particularly regarding the representativeness and generalizability of the results, would be valuable.\\n\\nQ3 - Could the authors explain why Llama-3.1-8B was not included in the experiment, considering it is only a 1B difference from the 7B models? Additionally, would the authors consider running supplementary experiments with the Llama-3.2 series (but I agree that adding these results may be infeasible, especially given its recent release) for offering valuable insights?\\n\\nQ4 - In Figure 5, could the authors specify the meaning of the numbers on the y-axis? 
This clarification would aid in interpreting the results more accurately.\\n\\nQ5 - Regarding Figure 5(b), it would be helpful if the authors could expand on this section in the main text, as the varying effects of quantization across task types could offer valuable insights.\\n\\nQ6 - Could the authors provide possible explanations for why Moondream2 exhibited strong performance in 3-bit quantization, while other models did not achieve similar results?\\n\\nQ7 - (Minor) In Appendix A.4, it appears that some of the table formats, particularly in Table 7, were rendered weirdly. The authors may want to review the formatting to ensure readability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I appreciate the detailed explanations and the sharing of plans for further experiments. I believe this work is valuable to the community and will maintain my positive perspective, keeping my score.\"}",
"{\"comment\": \"Thank you for your detailed rebuttal and the clarifications you provided. I\\u2019ve thoroughly reviewed your responses. While I appreciate the efforts made to address the concerns raised, I still believe that the work doesn\\u2019t fully meet the high standards expected for acceptance at ICLR, and therefore, I cannot change my scores.\\n\\nAs mentioned in my original comments, I do find the topic exciting and timely, and encourage you to continue working on this. I suggest doing a more systematic evaluation (possibly automated rather than manual), including more devices, and repeating the tests multiple times to report a standard deviation. I personally do not agree with your comment that choosing a random seed and setting the temperature will make the experiments deterministic (system-wise). There are various (system-related) factors that cause performance to differ across multiple runs.\"}",
"{\"comment\": \"> Claims of device consistency require support: In line 235, the authors assert that test results on desktop devices are consistent with those on mobile devices. However, the paper does not provide sufficient experimental data to support this claim.\\n\\nThank you for highlighting this important point about our consistency claim. The assertion in line 235 about consistency between desktop and mobile device results was based on preliminary observations during our experiments, specifically regarding the relative performance rankings and comparative trends between models under different quantization levels. However, we acknowledge that this claim requires more robust supporting evidence and detailed experimental validation. Currently, we can only verify this consistency for models under 3B parameters that can actually run on mobile devices (like Phi2, Gemma 2B, and TinyLlama 1B), as shown in Tables 4 and 5. \\n\\n> Room for improvement in experimental design: Although the potential impacts of device temperature is mentioned (line 462), temperature should be included as a sub-metric in the evaluation metrics to better reflect real-world conditions on mobile devices. I still recommend to test on more mobile devices to obtain more persuasive results.\\n\\nWe appreciate the reviewer's valuable feedback regarding temperature considerations and device diversity in our experimental design. We agree that device temperature is a critical factor in mobile AI deployment that deserves more thorough investigation. While we observed temperature impacts on efficiency metrics during our experiments (particularly noticeable in the decreased performance with increasing sample numbers for LMMs, as noted in Section 4.3), we acknowledge that a more systematic approach to temperature measurement and analysis would strengthen our evaluation framework. 
We plan to enhance MobileAIBench by incorporating temperature as a formal sub-metric, including continuous temperature monitoring during model inference, analysis of thermal throttling impacts on performance, and examination of the relationship between model size, quantization level, and heat generation. Regarding device diversity, our current results from the iPhone 14 provide important initial insights, but we agree that testing across a broader range of devices with varying hardware capabilities and thermal characteristics would provide more comprehensive and generalizable results. In future work, we plan to expand our evaluation to include different iPhone models, Android devices across various price points and hardware configurations, and tablets, which would help establish more robust benchmarking standards for mobile AI deployment. This expanded device coverage would also help identify how different mobile hardware architectures and thermal management systems impact model performance.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"## Questions Continued:\\n\\n> Q6 - Could the authors provide possible explanations for why Moondream2 exhibited strong performance in 3-bit quantization, while other models did not achieve similar results?\\n\\nThank you for asking. First, we would like to clarify that we are not claiming a general performance comparison for the selected models when serving on PyTorch or VLLM. Instead, we are specifically testing their performance when served on LlamaCPP. This setup requires manually building the computational graph using the GGML library, which is essential for further deployment on mobile devices. We did not implement any of the models on the GGML backend ourselves; rather, we tested only those with existing GGUF files released, using the same data processing, inference, and testing strategy across all models. We also performed a post-check on the logs to ensure that the models successfully generated answers and did not fail to generate a response, which could lead to incorrect answers.\\nGiven this context, Moondream2 is not the only model with acceptable performance under 3-bit quantization. For example, LLaVA-v1.6 also demonstrates good performance under 3-bit quantization. However, due to the large size of its image tokens, inference speed is extremely slow, even on a computer CPU (approximately one minute to generate the first token). This makes the model unsuitable for deployment on any current mobile device. At the time of testing, Moondream2 appeared to perform significantly better than other low-latency models under 3-bit quantization. However, some recently released models supporting LlamaCPP, such as MiniCPMv2.6, also show promising performance under 3-bit quantization (initial test results are attached). 
We will update the table as new small models that support LlamaCPP become available in the future.\\n\\n\\n\\n| Quantization | Model | Model Size | GQA | TextVQA |\\n|:------------:|:-----:|:---------:|:---:|:-------:|\\n| 3-bit | Moondream2 | 1B - 6B | 0.565 | 0.381 |\\n| 3-bit | MiniCPMv2.6 | > 6B | 0.462 | 0.654 |\\n\\n\\nTo better understand why LLaVA-v1.5 performs poorly under 3-bit quantization, one possible explanation is that it tends to generate \\\"1\\\" and \\\"0\\\" when it is expected to generate \\\"yes\\\" and \\\"no.\\\" Attached below are logs comparing predictions from LLaVA-v1.5 under 4-bit and 3-bit quantization for the same questions. Despite this issue, the 3-bit model is still capable of generating English word answers in certain test cases.\\n\\n4-bit quantization:\\nQuestion_id: 1584001, Prediction: London, GT: london\\n\\n3-bit quantization:\\n\\nAs on-device LLM serving is a new and rapidly evolving topic, the engines we used are also under active development. We acknowledge the possibility of implementation issues affecting certain models. We will closely monitor updates to LlamaCPP and related engines. If we become aware of any potential issues, we will improve the MobileAIBench codebase and update it accordingly.\\n\\n> Q7 - (Minor) In Appendix A.4, it appears that some of the table formats, particularly in Table 7, were rendered weirdly. The authors may want to review the formatting to ensure readability.\\n\\nThank you for pointing this out. We will ensure all tables are properly formatted and readable in the final version.\"}",
"{\"comment\": \"## Detailed comments:\\n\\n> a) The paper shows that 3-bit quantization significantly reduces accuracy without lowering inference latency. This could be further analyzed, as extreme quantization may introduce computational complexities that offset latency benefits.\\n\\nThe observed phenomenon where 3-bit quantization reduces accuracy without improving inference latency warrants deeper analysis and can be attributed to several technical factors. First, 3-bit operations require additional computational overhead for dequantization during inference, as modern mobile processors are not optimized for 3-bit arithmetic. Second, the non-standard memory alignment patterns created by 3-bit quantization can lead to inefficient memory access and cache utilization, potentially offsetting any theoretical benefits from reduced model size. This is particularly relevant on mobile hardware, where memory access patterns significantly impact performance. We acknowledge that our analysis could be strengthened by including hardware-level profiling (cache miss rates, memory bandwidth utilization) and operation-wise breakdowns to better understand these tradeoffs. Future work could explore hybrid quantization approaches or hardware-specific optimizations to better leverage extreme quantization while maintaining both accuracy and performance benefits.\\n\\n> b) The study only reports CPU results, but GPUs/XPUs are crucial for mobile AI tasks. Testing on these processors could reveal performance differences across hardware types, providing a fuller picture of deployment on mobile hardware.\\n\\nFor the benchmarking, we utilized GPU layers on the iPhone to accelerate the tasks. This was done by setting the value of `n_gpu_layers` in llama.cpp to 999.\\n\\n> c) Despite Phi2\\u2019s larger model size, it has lower CPU utilization and faster inference than Gemma. 
Investigating Phi2\\u2019s architectural or parallelization optimizations could reveal design principles for high efficiency in on-device deployments.\\n\\nPhi2's superior efficiency despite its larger size reveals important insights about mobile model design. The performance advantage likely stems from several key architectural decisions: (1) Phi2's use of grouped-query attention (GQA) reduces computational complexity while maintaining model capacity, (2) its flash attention implementation enables more efficient memory access patterns, resulting in better cache utilization. Additionally, Phi2's design incorporates parallel-friendly components that better utilize mobile hardware capabilities. This suggests that raw parameter count may be less important than architectural choices for mobile deployment. Future mobile-optimized models should prioritize such hardware-aware design principles, focusing on efficient attention mechanisms, optimized memory access patterns, and architectures that leverage mobile hardware parallelization capabilities. \\n\\n> d) Besides the mobile side, it is necessary to consider mobile-cloud-edge cooperation ways for better energy efficiency, e.g., Gearing Resource-Poor Mobile Devices with Powerful Clouds: Architecture, Challenges and Applications, iwc\\u201913; TrimCaching: Parameter-sharing AI Model Caching in Wireless Edge Networks, icdcs\\u201924, etc.\\n\\nWhile mobile-cloud-edge cooperation could improve energy efficiency, our focus on pure on-device evaluation addresses critical real-world requirements. First, many applications demand consistent availability regardless of network conditions, such as offline language translation or emergency response systems. Second, time-sensitive applications like real-time speech recognition or AR/VR interactions require ultra-low latency that cloud round-trips cannot guarantee. 
Third, privacy-critical applications handling sensitive data (healthcare, financial, personal communications) often cannot risk data transmission to external servers due to regulatory or security requirements. Our benchmark's emphasis on on-device performance provides valuable insights for these essential use cases where cloud offloading is not viable. Nevertheless, we acknowledge that future extensions of MobileAIBench could include optional cloud-edge cooperation scenarios to provide a complete picture of deployment options when network connectivity and privacy requirements permit.\"}",
"{\"comment\": \"We appreciate the detailed feedback and the time taken to review our submission. Below, we address the primary concerns and suggestions:\\n\\n## Weaknesses / Questions:\\n\\n> a) The experiments were conducted only on the iPhone 14, lacking evaluations on newer and more diverse devices. Currently, there are more mobile devices optimized specifically for on-device AI, such as the Snapdragon 8 Gen 3. Including these devices in testing would provide a more comprehensive view of model performance under different hardware conditions, offering broader insights for on-device AI applications.\\n\\nWe acknowledge that evaluating on a broader range of devices, including AI-optimized hardware like Snapdragon 8 Gen 3, would enhance the comprehensiveness of our study. However, this paper primarily focuses on assessing the feasibility of deploying language models on edge devices, such as mobile phones. Our aim was to explore how different Small Language Models (SLMs) behave under the same hardware specifications, their varied sensitivities to quantization levels, and how reliable they remain post-quantization, particularly concerning trust and safety considerations. The iPhone 14 served as a baseline due to its widespread usage and accessibility. We have since extended support to Android devices and recognize the importance of evaluating performance on newer and more diverse platforms. Expanding these experiments will be a key priority in future work to provide broader insights for on-device AI applications.\\n\\n\\n> b) In the section 4.3, the number of models tested is limited, failing to cover a wider variety of model architectures and parameter sizes. This limitation restricts a comprehensive understanding of how different models perform on mobile devices. 
Expanding the variety and scale of tested models would make the evaluation results more representative and valuable.\\n\\nWe acknowledge the reviewer's point about the limited model coverage in our efficiency and utilization evaluation. The current selection of three LLMs (Phi2 3B, Gemma 2B, TinyLlama 1B) and one LMM (Llava-Phi-2) was primarily constrained by the current hardware limitations of the iPhone 14 platform, particularly its 6 GiB RAM constraint. However, we agree that a more comprehensive evaluation would be valuable. In future work, we plan to: (1) include emerging mobile-optimized architectures such as Phi-3, newer versions of Mobile-VLM, and other efficiency-focused models, (2) evaluate models across different mobile hardware platforms with varying computational capabilities and memory constraints, and (3) analyze different architectural choices like attention mechanisms, embedding dimensions, and depth-width trade-offs that specifically impact mobile performance.\\n\\n> c) Although basic metrics such as performance, latency, and resource usage are provided, there is insufficient exploration of underlying reasons and optimization strategies. A more in-depth analysis would help us better understand the impact of different quantization levels and model architectures on task performance, offering valuable guidance for future research and practical deployment.\\n\\nWe appreciate the suggestion to explore the impact of quantization levels and model architectures on performance and optimization strategies in greater depth. As a benchmarking paper, our primary objective is to establish well-balanced and effective datasets and metrics for evaluating various LLMs, with a focus on practical deployment considerations for mobile devices. To this end, we conducted experiments on quantization and its effects across different tasks and models, offering valuable insights into their feasibility for on-device use. 
However, understanding why certain models are faster than others and why some require fewer resources is a broader and more general research question in the field of LLMs and is undoubtedly a compelling direction for future research. In subsequent iterations, we plan to include more detailed analyses of quantization-induced computational overheads, architectural optimizations, and their correlations with task performance.\"}",
"{\"summary\": \"This paper presents a benchmarking infrastructure for measuring on-device LLMs and LMMs in mobile deployments. The system reports a wide set of evaluation metrics, including quality benchmarks for standard NLP tasks, multimodal tasks, and trust/safety. The authors evaluated a series of open models on an iPhone 14 device and provided insights on the deployability of those models on mobile platforms.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Exciting new topic. Running LLMs (and possibly LMMs) on-device is important for enhanced privacy, and, under certain conditions, it provides enhanced performance and UX.\", \"I appreciate the extensive evaluation metrics, on top of the standard performance utilization: trust and safety, but also the qualitative metrics under a wide range of NLP/LMM tasks.\", \"Good quality of writing, figures, etc. I do have a few suggestions for further improvements, mentioned next.\"], \"weaknesses\": [\"Missing important related works:\", \"MELTing point: Mobile Evaluation of Language Transformers, from Laskaridis et al.\", \"Small Language Models: Survey, Measurements, and Insights, from Zhenyan Lu et al.\", \"Authors are claiming that this is the first work \\\"to provide a thorough benchmarking and analysis of open source LLMs\\\". I suggest they reduce such strong claims.\", \"I would expect from a paper like this to list (if not include in the evaluation) the available on-device frameworks instead of sticking to llamacpp. For instance: MLC LLM, MediaPipe from Google, PyTorch ExecuTorch, Apple MLX should be mentioned.\", \"Details about the followed methodology are missing. How did the authors run the evaluation tasks on-device? Did they automate the process? Did they repeat the process multiple times? Did they reboot the device per task? 
Did they close the app and wait until the phone (which can easily get very hot, with the CPU getting throttled) cools down?\", \"Considering that Std. is missing from the results reported in Tables 1 and 2, I assume the authors did not repeat the experiment multiple times, and only reported performance for one run. This is very limited; performance can vary based on device state.\", \"The authors tested the evaluation only on a single device (iPhone 14), using a single on-device inference engine (llamacpp).\", \"Performance evaluations were only executed on CPU. It is possible to enable GPU support on llamacpp (through Metal in iOS) and performance should increase. Measuring Android would also be good (though not necessary), considering that you have an app ready.\", \"Power performance evaluation is limited to Battery Drained Rate, instead of energy or discharge.\", \"\\\"Extensive experiments with MobileAIBench reveal several interesting findings. Quantization is an effective way to decrease model size while keeping the performance.\\\". But, isn't that expected? Why is it presented as the first \\\"interesting finding\\\"?\", \"\\\"we found 7B to be the upper limit of what a high-end phone\\u2019s hardware can manage (even after quantization)\\\" 7B is not the generic upper limit; it is only the limit for the particular device. Also, I am not sure the iPhone 14 can be considered a high-end phone, without mentioning the Pro model that has more RAM and can possibly fit larger models.\", \"While manuscript writing quality is good, I have a few comments/suggestions:\", \"Add a small summary explaining each figure in its caption. There were moments when I had to read ahead in order to understand them (e.g., Figure 1).\", \"Tables 1 and 2: What do the bold and underlined numbers indicate?\", \"\\\"The first part is a pipeline for use on desktops or servers, to evaluate model performance on a specially selected set of widely known benchmarks.\\\". Such as? 
Otherwise it looks too generic.\"], \"questions\": [\"How did you access Battery Drained Rate, CPU usage, and memory while executing an experiment? Connecting the phone with a USB cable would interfere with the results. Were you connected over WiFi? This is an example of missing details of the methodology that was followed.\", \"Have you run the experiment multiple times? If yes, what was the Std?\", \"How easy/hard is it to update the on-device inference engine in your pipeline?\", \"Have you measured performance while running the models in GPU?\", \"Are you planning to release the code in open source?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"## Weaknesses / Questions:\\n> Insufficient mobile-side experiments: The focus of MobileAIBench should be on deploying LLMs and LMMs on mobile devices and examining the effects of quantization on task performance in these environments. However, most of the experiment results are from desktops rather than mobile devices. Supplementing with more mobile-side experiments, such as assessing the impacts of quantization strategies on mobile devices, would strengthen the work.\\n\\nWe appreciate the reviewer's feedback regarding the need for more mobile-side experiments. Our current work includes mobile-specific evaluations leveraging an iOS app to directly assess key efficiency and utilization metrics such as Time-to-First-Token (TTFT), Input Tokens Per Second (ITPS), CPU/RAM usage, and Battery Drain Rate (BDR) on real mobile devices. These experiments evaluate quantized models across various tasks, including standard NLP (HotpotQA, Databricks-Dolly, Sql-Create-Context, XSum) and multimodal tasks (VQA-v2, ScienceQA), providing an accurate understanding of model performance in mobile environments (Section 4.3, Tables 4 and 5). However, there were several important technical constraints that shaped our experimental design. Currently, only models under 3B parameters can be practically deployed on mobile devices even after quantization, due to the significant memory and computational limitations of current mobile hardware. As shown in our experiments on the iPhone 14, even running a 4-bit quantized TinyLlama model (1B parameters) consumes over 50% of the device's 6GB RAM.\\n\\nGiven these hardware constraints, we took a two-pronged approach: First, we conducted comprehensive quantization impact analysis on desktop/cloud to establish baseline performance impacts across model sizes and architectures. This allowed us to evaluate larger models (up to 7B parameters) and identify promising candidates for mobile deployment. 
Then, for models that could run on mobile devices (Phi2, Gemma 2B, and TinyLlama 1B), we performed detailed on-device experiments measuring critical metrics like time-to-first-token, CPU/RAM utilization, and battery drain across multiple tasks. This approach lets us provide both broad insights about quantization effects and specific mobile deployment metrics. \\n\\n> Lack of data-level innovation: MobileAIBench seems to be a collection of existing tasks and datasets without introducing specific data or design for benchmarking LLMs in mobile scenarios. As a benchmark, it lacks specialized datasets or test case designs tailored to mobile-specific scenarios, which would better demonstrate the platform\\u2019s value. Thus, it may be more suitable to position MobileAIBench as a platform or testing tool rather than a standalone benchmark.\\n\\nWhile it's true that we leverage existing datasets, this was an intentional design choice to ensure comparability with established benchmarks while adding crucial mobile-specific evaluation dimensions. Our framework's innovation lies not in creating new datasets, but in providing the first comprehensive tooling and methodology for evaluating LLMs and LMMs specifically for mobile deployment - a critical gap in current research infrastructure. The selection of existing datasets was carefully curated to represent real-world mobile use cases, spanning question-answering, summarization, visual understanding, and trust & safety evaluation. Using established datasets allows researchers to contextualize mobile performance against known baselines while our framework adds critical new mobile-specific metrics such as time-to-first-token, CPU/RAM utilization, and battery drain that are absent from existing benchmarks. However, we acknowledge the reviewer's point about mobile-specific scenarios and agree this represents an opportunity for future work.\"}",
"{\"comment\": \"## Questions:\\n\\n> How did you access Battery Drained Rate, CPU usage, and memory while executing an experiment? Connecting the phone with a USB cable would interfere with the results. Were you connected over WiFi? This is an example of missing details of the methodology that was followed.\\n\\nThe Battery Drain Rate (BDR) was calculated separately when the device was not connected to a USB cable. This approach ensured that the BDR measurement was not influenced by external factors such as power being supplied via the USB connection. For other utilization metrics, such as CPU and memory usage, we used Apple\\u2019s Instruments tool on a laptop, with the device connected via USB.\\n\\n> Have you run the experiment multiple times? If yes, what was the Std?\\n\\nThe experiments were conducted with a fixed random seed and a sampling temperature set to 0 to promote deterministic outputs. While we acknowledge the potential for some non-deterministic behavior due to factors like floating-point arithmetic, multi-threading, and model variability, this approach was chosen as a practical compromise between computational feasibility and reproducibility. We recognize that running each experiment multiple times across all datasets and models could provide a more comprehensive view of performance variability, but given the significant resource requirements, we opted for this methodology to balance practical constraints with meaningful insights.\\n\\n> How easy/hard is it to update the on-device inference engine in your pipeline?\\n\\nUpdating the on-device inference engine in our pipeline is straightforward. As described in the paper, the pipeline is designed with a plug-and-play architecture. Users can replace the llama.cpp API with any other inference engine API of their choice without significant modifications. 
Additionally, we have integrated HuggingFace APIs in the codebase to support users interested in running models on desktops or servers, providing flexibility for diverse deployment scenarios.\\n\\n> Have you measured performance while running the models in GPU?\\n\\nFor the benchmarking, we utilized GPU layers on the iPhone to accelerate the tasks. This was done by setting the value of `n_gpu_layers` in llama.cpp to 999.\\n\\n> Are you planning to release the code in open source?\\n\\nYes, the code has already been released. To maintain anonymity during the review process, we have provided a zip file containing the code instead of sharing the actual link to the open-source repository. This ensures compliance with the double-blind review guidelines while enabling reviewers to assess the implementation. By making the code publicly available, we aim to facilitate reproducibility, encourage contributions from the research community, and support further advancements in benchmarking and optimizing on-device LLMs and LMMs. This aligns with our commitment to fostering community-driven research and enabling broader adoption of MobileAIBench.\"}",
"{\"comment\": \"## Questions:\\n\\n> Q1 - Could the authors clarify how this benchmark compares with existing ones regarding LLM quantization, such as LLM-QBench (Gong et al., 2024)? This comparison would help readers understand the unique contributions and positioning of this benchmark.\\n\\nMobileAIBench differs from LLM-QBench and other quantization benchmarks by specifically focusing on mobile deployment considerations. While LLM-QBench evaluates quantization impacts primarily on model accuracy, our benchmark additionally measures critical mobile-specific metrics like battery drain, memory utilization, and inference latency on real devices. This end-to-end evaluation provides insights into practical deployment challenges that aren't captured by traditional quantization benchmarks. For example, our findings show that some quantization levels that perform well in standard benchmarks may not be optimal for mobile deployment due to hardware-specific constraints. This mobile-first approach complements existing quantization benchmarks by bridging the gap between theoretical performance and practical mobile deployment.\\n\\n> Q2 - What was the rationale behind selecting a random sample of 1,000 from the dataset? Justification of this choice, particularly regarding the representativeness and generalizability of the results, would be valuable.\\n\\nThe choice of 1,000 samples was primarily driven by practical mobile testing constraints. Mobile devices have limited computational resources and battery life, making it impractical to evaluate complete datasets while maintaining consistent testing conditions across multiple runs. This sample size allows us to complete comprehensive testing across multiple models and quantization levels without thermal throttling or battery depletion affecting results. 
We acknowledge that future work should include statistical significance tests and standard deviation analysis to validate that this sample size adequately represents the full dataset distributions.\\n\\n> Q3 - Could the authors explain why Llama-3.1-8B was not included in the experiment, considering it is only a 1B difference from the 7B models? Additionally, would the authors consider running supplementary experiments with the Llama-3.2 series (but I agree that adding these results may be infeasible, especially given its recent release) for offering valuable insights?\\n\\nThe 7B parameter limit was chosen based on our empirical testing of memory constraints on the iPhone 14's 6 GiB RAM. While Llama-3.1-8B is only marginally larger than 7B models, our preliminary tests showed that even with 4-bit quantization, it exceeded stable memory thresholds for reliable mobile inference. We focused on models that could run consistently without memory-related performance degradation. Regarding Llama-3.2, we agree that including it would provide valuable insights, particularly given its improved efficiency. We plan to evaluate newer model series, including Llama-3.2, in future updates of MobileAIBench as mobile hardware capabilities evolve and more efficient deployment techniques become available.\\n\\n> Q4 - In Figure 5, could the authors specify the meaning of the numbers on the y-axis? This clarification would aid in interpreting the results more accurately.\\n\\nIn Figure 5, the y-axis represents the delta in model performance metrics, specifically the performance difference (percentage points) between 16-bit and 8-bit quantization levels. 
Figure 5(a) depicts the distribution of changes in model performance when the underlying model is quantized from 16-bit to 8-bit, where positive values indicate performance improvement and negative values show degradation.\\n\\n> Q5 - Regarding Figure 5(b), it would be helpful if the authors could expand on this section in the main text, as the varying effects of quantization across task types could offer valuable insights.\\n\\nWe agree that the varying effects of quantization across different task types shown in Figure 5(b) deserve more thorough discussion. The distribution patterns reveal that some tasks (like MMLU and GSM8K) are more sensitive to quantization than others (such as HotpotQA), suggesting that task characteristics influence quantization robustness. This has important implications for mobile deployment decisions - practitioners might need to consider task-specific requirements when choosing quantization levels.\"}",
"{\"summary\": \"The paper presents a benchmarking framework designed to evaluate the performance of large language models (LLMs) and large multimodal models (LMMs) on mobile devices, addressing the challenges of limited hardware resources. It consists of a desktop evaluation library and an iOS app, enabling comprehensive testing of quantized models across NLP, multimodal, and trust & safety tasks. MobileAIBench assesses models' efficiency, effectiveness, and resource utilization, providing insights into their feasibility for on-device deployment while supporting advancements in mobile AI research.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"a) Real-world experiments.\\n\\nb) By measuring latency and hardware resource usage on the iPhone 14, the study provides insights into the performance of smaller models.\\n\\nc) The paper introduces an open-source tool that facilitates convenient testing of small models.\\n\\nd) Writing is clear and fluent.\", \"weaknesses\": \"a) The experiments were conducted only on the iPhone 14, lacking evaluations on newer and more diverse devices. Currently, there are more mobile devices optimized specifically for on-device AI, such as the Snapdragon 8 Gen 3. Including these devices in testing would provide a more comprehensive view of model performance under different hardware conditions, offering broader insights for on-device AI applications.\\n\\nb) In Section 4.3, the number of models tested is limited, failing to cover a wider variety of model architectures and parameter sizes. This limitation restricts a comprehensive understanding of how different models perform on mobile devices. Expanding the variety and scale of tested models would make the evaluation results more representative and valuable.\\n\\nc) Although basic metrics such as performance, latency, and resource usage are provided, there is insufficient exploration of underlying reasons and optimization strategies. 
A more in-depth analysis would help us better understand the impact of different quantization levels and model architectures on task performance, offering valuable guidance for future research and practical deployment.\", \"detailed_comments\": \"a) The paper shows that 3-bit quantization significantly reduces accuracy without lowering inference latency. This could be further analyzed, as extreme quantization may introduce computational complexities that offset latency benefits. \\n\\nb) The study only reports CPU results, but GPUs/XPUs are crucial for mobile AI tasks. Testing on these processors could reveal performance differences across hardware types, providing a fuller picture of deployment on mobile hardware. \\n\\nc) Despite Phi2\\u2019s larger model size, it has lower CPU utilization and faster inference than Gemma. Investigating Phi2\\u2019s architectural or parallelization optimizations could reveal design principles for high efficiency in on-device deployments.\\n\\nd) Besides the mobile side, it is necessary to consider mobile-cloud-edge cooperation ways for better energy efficiency, e.g., Gearing Resource-Poor Mobile Devices with Powerful Clouds: Architecture, Challenges and Applications, iwc\\u201913; TrimCaching: Parameter-sharing AI Model Caching in Wireless Edge Networks, icdcs\\u201924, etc.\\n\\ne) Although the paper notes that more output tokens increase Battery Drain Rate (BDR), this relationship isn\\u2019t clearly shown in Table 4.\", \"questions\": \"a) The experiments were conducted only on the iPhone 14, lacking evaluations on newer and more diverse devices. Currently, there are more mobile devices optimized specifically for on-device AI, such as the Snapdragon 8 Gen 3. 
Including these devices in testing would provide a more comprehensive view of model performance under different hardware conditions, offering broader insights for on-device AI applications.\\n\\nb) In the section 4.3, the number of models tested is limited, failing to cover a wider variety of model architectures and parameter sizes. This limitation restricts a comprehensive understanding of how different models perform on mobile devices. Expanding the variety and scale of tested models would make the evaluation results more representative and valuable.\\n\\nc) Although basic metrics such as performance, latency, and resource usage are provided, there is insufficient exploration of underlying reasons and optimization strategies. A more in-depth analysis would help us better understand the impact of different quantization levels and model architectures on task performance, offering valuable guidance for future research and practical deployment.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We appreciate the detailed feedback and the time taken to review our submission. Below, we address the primary concerns and suggestions.\\n\\n## Weaknesses:\\n\\n> W1 - The current set of tasks can be limited. With the growing interest in UI-based control for digital devices (such as Cluade-3.5 for computer use), it would be beneficial to include related tasks. Have the authors considered incorporating AndroidWorld (Rawles et al., 2024) for general capability assessment or MobileSafetyBench (Lee et al., 2024) for evaluating the safety of agents controlling mobile devices?\\n\\nWe appreciate the valuable suggestion to incorporate UI-based control tasks. While our current benchmark focuses on fundamental NLP and vision tasks to establish baseline mobile performance metrics, we recognize that UI interaction represents an increasingly important use case, as demonstrated by models like Claude-3.5. Including benchmarks like AndroidWorld and MobileSafetyBench would enhance MobileAIBench by: (1) evaluating models' ability to understand and generate UI-related instructions, which is crucial for mobile assistants, (2) assessing safety considerations specific to device control, and (3) measuring performance on real-world mobile interaction scenarios. We plan to integrate these benchmarks in future versions to provide a more comprehensive assessment of models' capabilities in mobile device control scenarios, while maintaining our rigorous evaluation of computational efficiency and resource utilization that is essential for on-device deployment.\\n\\n> W2 - Relying solely on VQA for multimodal tasks may restrict the scope of analysis. Including other tasks, such as image captioning or OCR, could provide a more comprehensive evaluation of capabilities, especially considering their usage on mobile devices.\\n\\nWe agree that expanding beyond VQA would provide a more comprehensive evaluation of multimodal capabilities. 
While we chose VQA as an initial focus due to its well-established benchmarks and direct applicability to mobile scenarios, incorporating tasks like image captioning and OCR would better reflect real-world mobile use cases. In future versions of MobileAIBench, we plan to include these additional multimodal tasks while maintaining our detailed analysis of computational efficiency and resource utilization. This expansion will provide a more complete picture of how different multimodal capabilities impact mobile deployment considerations.\\n\\n> W3 - Although the authors\\u2019 choice of the iPhone-14 as a representative device is understandable, it would enhance the robustness of the study to consider other device types. For example, assessment with Android OS devices or tablets would provide a broader understanding.\\n\\nWhile we started with the iPhone-14 for practicality, we agree that including additional devices like Android phones and tablets would improve robustness. Our framework already supports Android devices, and we plan to expand testing in future iterations to address this limitation.\\n\\n> W4 - (Minor) Certain aspects of the presentation could be improved. For example, the explanation of Figure 5 could be more detailed, and Figure 7 appears to be oddly rendered.\\n\\nThank you for noting these presentation issues. We agree that Figure 5's explanation of performance changes during quantization could be more detailed, particularly in describing the violin plot distributions and their implications for model robustness. We will also fix the rendering issue in Figure 7 to ensure clear visualization of the results.\"}"
]
} |
EEWpE9cR27 | Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models | [
"Qin Liu",
"Chao Shang",
"Ling Liu",
"Nikolaos Pappas",
"Jie Ma",
"Neha Anna John",
"Srikanth Doss",
"Lluis Marquez",
"Miguel Ballesteros",
"Yassine Benajiba"
] | The safety alignment ability of Vision-Language Models (VLMs) is prone to be degraded by the integration of the vision module compared to its LLM backbone. We investigate this phenomenon, dubbed as “safety alignment degradation” in this paper, and show that the challenge arises from the representation gap that emerges when introducing vision modality to VLMs. In particular, we show that the representations of multi-modal inputs shift away from that of text-only inputs which represent the distribution that the LLM backbone is optimized for. At the same time, the safety alignment capabilities, initially developed within the textual embedding space, do not successfully transfer to this new multi-modal representation space. To reduce safety alignment degradation, we introduce Cross-Modality Representation Manipulation (CMRM), an inference time representation intervention method for recovering the safety alignment ability that is inherent in the LLM backbone of VLMs, while simultaneously preserving the functional capabilities of VLMs. The empirical results show that our framework significantly recovers the alignment ability that is inherited from the LLM backbone with minimal impact on the fluency and linguistic capabilities of pre-trained VLMs even without additional training. Specifically, the unsafe rate of LLaVA-7B on multi-modal input can be reduced from 61.53% to as low as 3.15% with only inference-time intervention. | [
"Safety Alignment",
"Multi-modality",
"AI Security"
] | Reject | https://openreview.net/pdf?id=EEWpE9cR27 | https://openreview.net/forum?id=EEWpE9cR27 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wRWRdVz14I",
"mHlvMeTmgn",
"fU8kaOvl03",
"adX0dBEqTg",
"XaOdHbFiRA",
"UUXJ3RpCOq",
"LRcd4DFlfR",
"8im9QIxC52",
"680bSoMWS6"
],
"note_type": [
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1733212891050,
1734755091296,
1730789842904,
1730717669218,
1733214395164,
1737523567541,
1733126911548,
1729444574466,
1730562715524
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3288/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3288/Area_Chair_yf48"
],
[
"ICLR.cc/2025/Conference/Submission3288/Reviewer_8DSj"
],
[
"ICLR.cc/2025/Conference/Submission3288/Reviewer_UVRU"
],
[
"ICLR.cc/2025/Conference/Submission3288/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3288/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3288/Reviewer_ZbT2"
],
[
"ICLR.cc/2025/Conference/Submission3288/Reviewer_yyM3"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer UVRU\", \"comment\": \"We appreciate the reviewer's insightful feedback. We provide a detailed response below to address the concerns and questions raised by the reviewer.\\n\\n>**W1: Ideal distribution of MLLMs**\\n\\nThank you for your thoughtful comment and for highlighting an important perspective on the alignment of MLLMs. We acknowledge that the introduction of multi-modal data, additional parameters, and distinct learning techniques naturally shifts the representation distribution, aligning it more towards multi-modal functionalities rather than the original textual alignment of the LLM backbone. Our study does not argue that the ideal distribution of MLLMs should strictly adhere to the safely trained LLM backbone; rather, we propose that the inherent safety alignment mechanisms of the LLM backbone, which have been optimized for textual inputs, provide a strong foundation for mitigating safety risks in VLMs. This is particularly crucial as the incorporation of vision data has been shown to degrade safety alignment. Our methodology seeks to recalibrate multi-modal representations to a space where the LLM's intrinsic safety mechanisms can be effectively leveraged, without compromising the model\\u2019s multi-modal functionalities. While the safety regulations of the LLM may not apply directly to the shifted inputs, our empirical results demonstrate that partial recovery of safety alignment is achievable. We appreciate your observation about the need for broader evidence, and we agree that future work should investigate the balance between maintaining safety and fostering comprehensive VLM development, possibly through hybrid alignment strategies that integrate both safety and utility optimization.\\n\\n---\\n\\n>**W2: Utility Benchmarks**\\n\\nThank you for this constructive comment. 
We provide the results on MMMU as follows.\\n\\n| | MMMU-Overall |\\n|----------------|--------------|\\n| LLaVA-v1.5-7B | 33.90 |\\n| + CMRM | 33.00 |\\n| LLaVA-v1.5-13B | 36.30 |\\n| + CMRM | 35.80 |\\n\\nThese results are consistent with the observations reported in Table 1, where CMRM demonstrates minimal impact on utility performance while significantly improving safety alignment. Importantly, the marginal differences in utility scores highlight the effectiveness of CMRM in preserving the functional capabilities of the models across different benchmarks.\\nWe recognize the importance of extending utility evaluation to other comprehensive benchmarks, such as MM-Vet and MME, to further validate the robustness of our approach. As such, we are actively working on completing experiments across all settings outlined in Table 1 and will include these results in the revised version of the paper.\\n\\n---\\n\\n>**W3: Baseline Methods**\\n\\nThank you for this suggestion. We will clarify the details when introducing the baseline methods. According to the work of VLGuard [1], both VLGuard Mixed and VLGuard PH train VLMs on a set of safety data and update full parameters. We evaluate the safety performance of these baseline methods by directly testing the models released by [1].\\n\\n\\n[1] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models\\n\\n\\n---\\n\\n>**Q1: Recover safety alignment knowledge**\\n\\nThank you for your thoughtful comment regarding the relationship between safety alignment degradation and the potential forgetting of safety alignment knowledge in MLLMs (VLMs). Our study specifically investigates the safety alignment degradation in VLMs by comparing the safety alignment capability of the **entire VLM** to that of its **updated LLM backbone** within the VLM. 
During the instruction fine-tuning process, both the vision encoder and the LLM backbone parameters are updated, which inevitably impacts the pre-existing safety alignment knowledge of the original LLM backbone. Therefore, the degradation we address is not the comparison between the VLM's safety and the original standalone LLM's safety, but rather between the safety of the VLM as a whole and the updated LLM backbone embedded within it. As such, the goal of our proposed CMRM is to restore the safety alignment of the VLM to the level of its updated LLM backbone, which inherently represents the most aligned state achievable within the VLM after fine-tuning. Regarding the broader question of whether forgotten safety alignment knowledge can be recovered through training-free calibration, we agree this is an intriguing direction for future research, as it would open new pathways for enhancing safety alignment without additional fine-tuning.\"}",
"{\"metareview\": \"This paper investigates safety alignment degradation in Vision-Language Models (VLMs) and proposes Cross-Modality Representation Manipulation (CMRM) as a mitigation approach. The paper shows that incorporating vision modality can cause VLMs to deviate from their LLM backbone's safety properties, and presents an inference-time intervention method to help recover these capabilities.\\n\\n### Strengths:\\n1. Novel and important problem identification\\n> \\\"The issue of safety alignment degradation presents a novel problem that has not been previously explored.\\\" - Reviewer yyM3\\n\\n2. Simple yet effective solution\\n> \\\"The paper presents a relatively simple and effective approach to address it\\\" - Reviewer 8DSj\\n\\n3. Strong empirical validation\\n> \\\"Comprehensive quantitative and qualitative analyses are provided to substantiate the phenomenon of safety alignment degradation in MLLMs\\\" - Reviewer yyM3\\n\\n### Weaknesses:\\n1. Limited evaluation scope\\n> \\\"The validation of the trade-off between safety and general capabilities is limited to only three Vision-Language Models (VLMs) and four benchmarks, which may not be sufficient to generalize the findings\\\" - Reviewer UVRU\\n\\n2. Lack of human evaluation\\n> \\\"The accuracy of using LLaMA-3-8B-Instruct for safety judgment has not been demonstrated, leading to potential unfaithfulness in the evaluation results\\\" - Reviewer yyM3\\n\\n3. Unclear hyperparameter sensitivity\\n> \\\"CMRM requires a hyperparameter \\u03b1, but as stated in Fig. 
2, the setting of dataset-level \\u03b1 depends on the dataset and the specific VLM backbone, making it a potentially difficult parameter to adjust\\\" - Reviewer ZbT2\\n\\n\\n### Justification:\\n\\nWhile the paper addresses an important problem in AI safety and proposes an interesting solution, multiple significant concerns remain:\\n\\n#### Technical soundness issues:\\n\\n- Equations inconsistency noted by ZbT2\\n- Limited evaluation scope and lack of human verification\\n- Unclear hyperparameter sensitivity that wasn't fully addressed in rebuttal\\n\\n\\n#### Limited experimental validation:\\n\\n- Only tested on three VLMs and four benchmarks\\n- Missing crucial baselines and comparisons\\n- Incomplete analysis of \\u03b1 parameter sensitivity\\n\\n\\n#### Contribution limitations:\\n\\n- Core method (feature interpolation) is not novel\\n- Results show degraded utility performance\\n- Limited generalizability evidence\\n\\nWhile the authors provided some responses during discussion, key technical concerns remain unaddressed or only partially addressed. The current form of the paper requires substantial improvements in technical validation and experimental analysis before it meets the conference standards.\", \"additional_comments_on_reviewer_discussion\": [\"The authors provided detailed responses to most concerns. They have addressed:\", \"The selection of baseline models and evaluation metrics (Response to W3 from 8DSj)\", \"Plans to expand experiments on MMMU benchmark (Response to W2 from UVRU)\", \"Explanation of \\u03b1 parameter sensitivity (Response to Q1 from yyM3)\", \"However, reviewer yyM3's follow-up comment indicates their concern about \\u03b1 sensitivity was only partially addressed and would benefit from more illustrative results.\"]}",
"{\"summary\": \"The authors propose Cross-Modality Representation Manipulation (CMRM), an inference-time representation intervention method aimed at restoring the inherent safety alignment capabilities of the LLM backbone within VLMs, while preserving their functional abilities. Empirical results demonstrate that this approach recovers the alignment abilities of the LLM backbone with minimal impact on the fluency and linguistic capabilities of pre-trained VLMs, without additional training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The issue of safety in multimodal models is highly significant, and this paper presents a relatively simple and effective approach to address it.\\n2. The paper is well-structured and clearly written.\", \"weaknesses\": \"1. After carefully reviewing the paper, the selection process for meaningless or corrupted images remains unclear. Are these blank images or noise images? The choice of such images is crucial.\\n2. Line 200: When constructing the calibration term, the paper uses VLSafe or manipulated JailbreakLLMs as the anchor dataset. Can the resulting calibration term effectively generalize to out-of-distribution (OOD) images or handle more diverse image types? For example, if VLSafe is used as the anchor dataset, how does this approach perform on the subtask using stable-diffusion-generated images in MM-SafetyBench[1], and across a broader range of other safety tasks within MM-SafetyBench[1] and FigStep[2]? The authors should further address these questions regarding generalizability.\\n3. Utility testing currently employs the ScienceQA dataset, which is domain-specific, while general visual understanding is evaluated on the LLAVA-COCO dataset, which is quite small (90+ images). Can the proposed method maintain utility on more comprehensive benchmarks, i.e., MM-Vet, MMMU, MME? \\n4. 
Additionally, LLaMA-3-8B may lack precision for this evaluation\\u2014why not use more reliable models such as LLaMA-Guard or GPT-4? Has there been any human verification of accuracy?\\n5. Minor: The related work sections on Safety Alignment for VLMs and Representation Engineering overlook some relevant training-based and inference-based methods for safety improvement (see references [3-6]).\\n\\n[1] MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models\\n\\n[2] FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts\\n\\n[3] Tuning Language Models by Proxy\\n\\n[4] Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts\\n\\n[5] CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration\\n\\n[6] SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Model\", \"questions\": \"Please refer to weakness. If the authors successfully address my concerns, I would consider increasing the score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper addresses the generation safety of MLLMs. The authors assume that the ideal distribution of MLLMs should adhere to the safely trained backbone of their LLMs. Based on this assumption, they propose a cross-modal representation calibration method to realign the VL distribution with the original safe Language-only distribution.\\nWhile I find the chain of motivation behind this work to be reasonable, I have concerns regarding the foundational assumption and the overall motivation. The assumption appears to be somewhat biased towards prioritizing safety control over the broader development of MLLMs. Additionally, the trade-off between safety and general capabilities is only validated on three MLLMs and four benchmarks, which is rather limited.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Despite concerns over the foundational assumption, the chain of assumption, validation, and method, along with the associated visualizations, is clear and easy to follow.\", \"The authors' proposed inference-time safety intervention is efficient and easy to use.\"], \"weaknesses\": [\"The MLLM series the authors focus on aligns VL representations to language models through the use of adapters. With the introduction of multi-modal data, additional parameters, and different learning techniques, the distribution naturally shifts and should shift towards VL alignment. One might not expect the safety regulations of LLMs to still apply effectively to this shifted input, especially with the potential inclusion of novel information. 
Thus I find the foundational assumption that the ideal distribution of MLLMs should strictly adhere to the safely trained LLM backbone may be biased towards prioritizing safety over the comprehensive development of MLLMs.\", \"Additionally, upon reviewing the references cited by the authors, I did not find support for the assumption.\", \"The validation of the trade-off between safety and general capabilities is limited to only three Vision-Language Models (VLMs) and four benchmarks, which may not be sufficient to generalize the findings.\", \"It is not clear how the baseline MLLMs are tested in the preliminary study and experiments. Different VL tuning strategies may also affect the findings. For example, it is unclear whether the vision tower is fixed or tuned with VL alignment.\"], \"questions\": \"VLGuard argues that the re-emerged safety issues of MLLMs stem from harmful data encountered during VL instruction fine-tuning, which leads to the forgetting of human safety alignment. I wonder how the proposed mitigation responds to new harmful information, and whether forgotten safety alignment knowledge can be recovered through training-free calibration.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer yyM3\", \"comment\": \"We appreciate the reviewer's insightful feedback. We provide a detailed response below to address the concerns and questions raised by the reviewer.\\n\\n>**W1: Novelty**\\n\\nWe fully acknowledge that feature interpolation itself is not novel, as it has been applied in various domains to address distinct challenges, including VLM hallucination[1]. However, the novelty and core contribution of our work lies in identifying and articulating the specific insight that multi-modal representation shifts are a key factor behind safety alignment degradation in VLMs. Feature interpolation emerges as a natural and effective method to act on this insight, enabling us to directly address the representation shift. Our experiments demonstrate that this approach significantly mitigates safety degradation while preserving model utility, validating its efficacy in this context. We believe this targeted application of feature interpolation, guided by our unique insight, contributes meaningfully to the understanding and enhancement of VLM alignment.\\n\\n[1] Favero, Alessandro, et al. \\\"Multi-modal hallucination control by visual information grounding.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\\n\\n---\\n\\n>**W2: Safety Judgement**\\n\\nThank you for your valuable suggestion. Due to licensing and accessibility constraints, we were unable to use LLaMA-Guard for evaluation in this study. However, we acknowledge the importance of employing more reliable models for evaluation and plan to integrate GPT-4 into our future assessments.\\nRegarding human verification, we have not conducted manual evaluation in this study due to resource limitations. 
That said, we agree that human verification would provide additional assurance of the accuracy and safety of the outputs and is a meaningful addition for future iterations of this work.\\n\\n---\\n\\n>**Q1: Broader range of alpha values**\\n\\nThank you for the suggestion to analyze a broader range of alpha values, such as alpha \\u2208 {1, 10, 100}. However, as demonstrated in our experiments, when alpha reaches a value as small as 2.0, the model begins to malfunction due to overcorrection. This overcorrection causes the representations to deviate excessively from the intended distribution, undermining both safety alignment and the model's general utility. We have provided a visualization in Figure 3 that illustrates this phenomenon, showing how higher alpha values push the representations too far, leading to degraded performance. Expanding the range to include alpha values significantly larger than 2 would, therefore, not yield meaningful results, as the model's alignment capabilities are already compromised at these lower thresholds. Instead, we chose a finer interval within a practical range (e.g., 0.1) to provide a more precise analysis of the optimal alpha value for effective calibration without overcorrection.\\n\\n---\\n\\n>**Q2: Performance of a pure LLM**\\n\\nThank you for the valuable suggestion. We agree that presenting the performance of a pure LLM, such as Vicuna, on the VLSafe and JailbreakLLMs datasets would provide valuable context and further illustrate the issue of safety degradation. Including these results would allow for a clearer comparison between the safety alignment capabilities of the original LLM backbone and the full VLM, helping to highlight the extent of degradation introduced by the integration of the vision modality. We will incorporate these results in future revisions to enhance the clarity and comprehensiveness of our analysis. Thank you again for this constructive feedback.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Response to Reviewer 8DSj\", \"comment\": \"We appreciate the reviewer's insightful feedback. We provide a detailed response below to address the concerns and questions raised by the reviewer.\\n\\n> **W1: Selection of meaningless or corrupted images**\\n\\nThank you for pointing this out. In our study, we used the original image as it is, without introducing corruption or modification, for simplicity and to maintain consistency across experiments. This approach allowed us to directly evaluate the model's response to unaltered multi-modal inputs. While exploring corrupted images, such as those with Gaussian noise, could provide further insights, our goal in this pilot study was to establish the feasibility of the proposed method using straightforward input configurations. We appreciate your suggestion and consider it a valuable direction for future work.\\n\\n\\n> **W2: Generalization on OOD images**\\n\\nWe appreciate your insightful question regarding the generalizability of the calibration term to OOD images and broader safety tasks. Due to licensing and accessibility constraints, we were unable to directly test our approach on popular benchmarks such as MM-SafetyBench and FigStep, which we acknowledge as valuable resources for future validation.\\nTo partially address generalization, however, we point to the results on ScienceQA (Table 1), where the images come from a distinct distribution compared to VLSafe. These results demonstrate that our method retains competitive performance when evaluated on a different dataset, indicating its potential for generalization beyond the original anchor dataset.\\nWe agree that evaluating our method on diverse tasks and datasets like stable-diffusion-generated images and MM-SafetyBench subtasks would strengthen the claims of generalization. 
This remains an important direction for future work, and we appreciate your suggestion to further investigate this aspect.\\n\\n> **W3: Utility Test**\\n\\nThank you for this constructive comment. We provide the results on MMMU as follows.\\n\\n| | MMMU-Overall |\\n|----------------|:--------------:|\\n| LLaVA-v1.5-7B | 33.90 |\\n| + $CMRM_{dataset}$ | 33.00 |\\n| LLaVA-v1.5-13B | 36.30 |\\n| + $CMRM_{dataset}$ | 35.80 |\\n\\nThese results are consistent with the observations reported in Table 1, where CMRM demonstrates minimal impact on utility performance while significantly improving safety alignment. Importantly, the marginal differences in utility scores highlight the effectiveness of CMRM in preserving the functional capabilities of the models across different benchmarks.\\nWe recognize the importance of extending utility evaluation to other comprehensive benchmarks, such as MM-Vet and MME, to further validate the robustness of our approach. As such, we are actively working on completing experiments across all settings outlined in Table 1 and will include these results in the revised version of the paper.\\n\\n\\n> **W4: Evaluation Model**\\n\\nThank you for your valuable suggestion. Due to licensing and accessibility constraints, we were unable to use LLaMA-Guard for evaluation in this study. However, we acknowledge the importance of employing more reliable models for evaluation and plan to integrate GPT-4 into our future assessments.\\nRegarding human verification, we have not conducted manual evaluation in this study due to resource limitations. That said, we agree that human verification would provide additional assurance of the accuracy and safety of the outputs and is a meaningful addition for future iterations of this work.\\n\\n> **W5: Missing Relevant Methods**\\n\\nThank you for the valuable suggestion. 
We acknowledge the importance of including a broader range of relevant works in the related work sections to provide a comprehensive view of VLM safety alignment. We will expand the discussion to incorporate additional training-based and inference-based methods for safety improvement.\"}",
"{\"summary\": \"The paper empirically explains that \\\"safety alignment degradation\\\" is caused by the representation gap introduced after incorporating the visual modality. It provides detailed empirical evidence and proposes an inference-time alignment method called CMRM, which enhances the safety capability of VLMs in handling harmful inputs to a certain extent.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper verifies from a novel experimental perspective that the \\\"safety alignment degradation\\\" of VLMs is caused by the representation gap introduced after incorporating the visual modality.\", \"CMRM enhances VLM backbones' safety performance.\"], \"weaknesses\": [\"Eq. 3 seems to be in conflict with Eq. 6. In Eq. 4 and 5, $v^l_{data}$ and $v^l_{sample}$ seem equal to $\\\\Delta$ defined in Eq. 2. Thus, Eq. 6 should be $h^l_{aligned}=h^l_o+v^l$?\", \"There are already many publicly available VLM Safety Benchmarks, so why is it necessary to additionally construct a VLM Benchmark from the pure-text JailbreakLLM for experiments? What are the advantages of such a constructed benchmark over existing VLM safety benchmarks? It seems that manipulated JailbreakLLM datasets may have difficulty ensuring a high correlation between the vision input and text input. Furthermore, as shown in Table 1, the Unsafe Rate of the VLM backbone is relatively low when both image and text inputs are provided together in the manipulated JailbreakLLM datasets. Replacing the original images with blank or noisy images even increases the Unsafe Rate. Does such a dataset hold reference value in a vision-language setting?\", \"All the experiments were conducted on datasets where the harmful text and images have high similarity. Will CMRM still be effective on datasets where the text instructions are safe, but the visual input contains unsafe typography or text-to-image contents (such as FigStep and MM-SafetyBench)? 
Does CMRM tend to refuse to answer, or does it provide generic responses unrelated to the image on these datasets?\", \"Will CMRM still be effective when dealing with perturbation-based visual attacks (such as adding noise to images)? The authors should include additional experiments to verify the robustness of CMRM in such scenarios.\", \"As an inference time alignment method, the authors should include some inference-time defense baselines mentioned in related works for comparison in the experiments on both safety and utility performance.\", \"CMRM requires a hyperparameter $\\\\alpha$, but as stated in Fig. 2, the setting of dataset-level $\\\\alpha$ depends on the dataset and the specific VLM backbone, making it a potentially difficult parameter to adjust. When applied to scenarios such as unsafe typography, text-to-image, or perturbation-based visual attack methods, will the setting of $\\\\alpha$ introduce additional challenges or have other impacts?\", \"The impact of different $\\\\alpha$ settings on utility ability needs to be further explored. For instance, at the sample level, when $\\\\alpha=1$, responses such as \\\"I'm sorry, I'm not sure what you mean.\\\" already appear. However, dataset-level CMRM can give a helpful response when $\\\\alpha=1$. The authors need to explain the sensitivity of CMRM to $\\\\alpha$ at both the dataset and sample levels, addressing why such responses occur and how $\\\\alpha$ affects the alignment across different levels.\", \"The paper lacks experiments on the hyperparameters for sample-level CMRM. Since sample-level alignment provides more fine-grained adjustments, the $\\\\alpha$ setting for individual samples should be more sensitive than dataset-level settings.\", \"CMRM results in a certain decrease and impacts utility performance. 
Similar findings are also reflected in the case study.\"], \"questions\": \"Please see the weaknesses!\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper investigates the problem of safety alignment degradation in multi-modal large language models (MLLMs). The authors demonstrate that the distribution of vision-language representations generated by MLLMs shifts away from the original representation of large language models, which leads to safety alignment degradation. To address this issue, the paper introduces a method called Cross-Modality Representation Manipulation (CMRM), which performs representation manipulation during inference to mitigate this phenomenon. Experimental results show that the proposed CMRM method enables MLLMs to recover their safety alignment capabilities without any additional training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The issue of safety alignment degradation presents a novel problem that has not been previously explored.\\n\\n2. The paper introduces a simple yet effective approach to representation manipulation, aimed at mitigating the degradation phenomenon.\\n\\n3. Comprehensive quantitative and qualitative analyses are provided to substantiate the phenomenon of safety alignment degradation in MLLMs.\", \"weaknesses\": \"1. The concept of feature interpolation is not novel, as similar ideas have already been proposed in other areas, such as classifier-free guidance [1] and contrastive decoding [2].\\n\\n2. The accuracy of using LLaMA-3-8B-Instruct for safety judgment has not been demonstrated, leading to potential unfaithfulness in the evaluation results. The authors need to demonstrate the correlation between human evaluation and model-based evaluation to strengthen the validity of the results.\\n\\n[1] Ho, Jonathan, and Tim Salimans. \\\"Classifier-free diffusion guidance.\\\" arXiv preprint arXiv:2207.12598 (2022).\\n\\n[2] Li, Xiang Lisa, et al. \\\"Contrastive decoding: Open-ended text generation as optimization.\\\" arXiv preprint arXiv:2210.15097 (2022).\", \"questions\": \"1. 
The sensitivity of alpha is analyzed in Section 4.3 using several values with an interval of 0.1. It would be more effective to illustrate the sensitivity by presenting a broader range of alpha values, such as alpha \\u2208 {1, 10, 100}.\\n\\n2. To illustrate the issue of safety degradation, it would be beneficial to present the performance of a pure LLM (e.g., Vicuna) on VLSafe and JailbreakLLMs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
EEI5R89Cmv | Neural Exploratory Landscape Analysis for Meta-Black-Box-Optimization | [
"Zeyuan Ma",
"Jiacheng Chen",
"Hongshu Guo",
"Yue-Jiao Gong"
] | Recent research in Meta-Black-Box-Optimization (MetaBBO) has shown that meta-trained neural networks can effectively guide the design of black-box optimizers, significantly reducing the need for expert tuning and delivering robust performance across complex problem distributions. Despite their success, a paradox remains: MetaBBO still relies on human-crafted Exploratory Landscape Analysis features to inform the meta-level agent about the low-level optimization progress. To address this gap, this paper proposes Neural Exploratory Landscape Analysis (NeurELA), a novel framework that dynamically profiles landscape features through a two-stage, attention-based neural network, executed in an entirely end-to-end fashion. NeurELA is pre-trained over a variety of MetaBBO algorithms using a multi-task neuroevolution strategy. Extensive experiments show that NeurELA achieves consistently superior performance when integrated into different and even unseen MetaBBO tasks and can be efficiently fine-tuned for a further performance boost. This advancement marks a pivotal step in making MetaBBO algorithms more autonomous and broadly applicable. The source code of NeurELA can be accessed at https://anonymous.4open.science/r/Neur-ELA-303C. | [
"Landscape Analysis",
"Black-Box Optimization",
"Meta-Black-Box-Optimization",
"Learning to Optimize"
] | Accept (Poster) | https://openreview.net/pdf?id=EEI5R89Cmv | https://openreview.net/forum?id=EEI5R89Cmv | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xyqBiRpyFW",
"wAFpM5CnsS",
"rWvpbSTEHm",
"oNCum3YkAx",
"jubHlyl9Ld",
"jeacIMnqnu",
"gpnvXyYbDA",
"fg2Ww9vkSo",
"dxHNEHmOCh",
"dvpuqqblHA",
"bDFIQOueE0",
"ZD0QvSnHXZ",
"YxU6OrA2IB",
"W3ffbexefK",
"Ux7oCrywOl",
"UHDHcPPkRK",
"U2JTMknaUf",
"SW9rWsPxgd",
"LG8HO34OJ0",
"HmqCjOh3B3",
"E179ySMGpf",
"CQ6ykAWTxQ",
"4L1k5VszC2",
"3ZWytygS6x"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_comment",
"official_comment"
],
"note_created": [
1734679267407,
1732288098558,
1732620360660,
1732619025404,
1732287674239,
1732287845259,
1733227081726,
1732288011951,
1737523712096,
1733227474975,
1732613141241,
1732539574553,
1732689071425,
1731164595388,
1732544478767,
1731110331880,
1729624887675,
1732287999227,
1733226026644,
1732287629468,
1730609415695,
1740377408050,
1733225803379,
1732697554144
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5530/Area_Chair_Lg5i"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_C3y6"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_RDeR"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_PkFr"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_ARe5"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_RDeR"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_PkFr"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_C3y6"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_ARe5"
],
[
"~Zeyuan_Ma1"
],
[
"ICLR.cc/2025/Conference/Submission5530/Reviewer_RDeR"
],
[
"ICLR.cc/2025/Conference/Submission5530/Authors"
]
],
"structured_content_str": [
"{\"metareview\": \"The goal of the work is to learn a landscape featurizer for meta-blackbox optimization. The pipeline of the meta-BBO loop basically consists of:\\n\\n1. A featurizer (e.g. attention-based architecture) which takes in the current trajectory of evaluations $(x,y)$ and outputs a feature.\\n2. This feature is then used to condition a separate blackbox optimizer (which itself needs to be neural-network based, e.g. an LSTM)\\n3. Since this bilevel problem is not differentiable through the featurizer, non-gradient based optimization methods (e.g. CMA-ES) must be used to train the featurizer.\\n4. The featurizer can be frozen and used directly over new test cases, or can also be fine-tuned if the new test situation has a large distribution shift from training (e.g. objective functions are different, or inner-loop algorithm is different)\\n\\nExperiments are conducted over BBOB (deterministic and noisy) and protein design objectives, as well as when the inner-loop algorithm is changed.\", \"ablations_show\": \"1. The Neural ELA features are cleanly separable for determining when the inner-loop algorithm should exploit or explore, whereas traditional ELA features are not so separable.\\n2. If the featurizer has too many learnable parameters, performance degrades since e.g. CMA-ES may suffer in high dimension, and that variants like FCMAES are the best for learning such parameters.\\n\\n## Strengths\\n* After parsing the paper properly, the general logic makes sense. It's understandable why e.g. the featurizer needs to be trained with CMA-ES, and the whole setup is intuitive.\\n* It's interesting that these NeuralELA features can be transferrable (and worst-case just needs a bit of fine-tuning) to new algorithms as well, and not just new objective functions. 
This means that it's possible to construct universal features for representing the objective landscape, which can be sent to any inner-loop optimizer.\\n\\n## Weaknesses\\n* I needed to re-read the paper multiple times to understand the nuances - The writing could be made much more clean. Currently the paper is too wordy and doesn't give enough \\\"breathing space\\\" between its text. Also, the notation can be reduced significantly, and I think the main method can be described much more simply in at most half a page.\\n* While the paper makes solid contributions, due to the way the paper is written at the moment, there aren't a lot of fundamental or profound conclusions. The paper could've been elevated, if e.g. training was done over a much larger set of algorithms and objectives, and that the featurizer's output became more universal and no longer required fine-tuning at all. \\n\\nGiven the Reviewer scores, for now we can definitely accept the paper for poster presentation. I wouldn't move it to spotlight however, due to the weaknesses raised above.\", \"additional_comments_on_reviewer_discussion\": \"The reviewer scores were (3,8,8,8), leading to a clear acceptance.\", \"the_common_issues_raised_were\": [\"The paper needs to be written better and cleaner, and I agree - this can be fixed during camera-ready version, though.\", \"Computational time, i.e. there is a large upfront cost of pretraining the featurizer, and possibly more if finetuning is involved. There is also the cost of using the featurizer itself at inference time, although it is quite small already.\", \"The upfront cost of pretraining a meta-learned BBO system will always occur and is reasonable that it's required.\", \"Since the training algorithm (e.g. CMA-ES) is zeroth order, it will naturally suffer over higher dimensions.\", \"I personally disagree with this point, seeing as how e.g. ES-based algorithms have been shown to even optimize millions of parameters. 
I suspect that the specific CMA-ES chosen by the authors isn't optimal for high-D weight training, but they could've used other ES algorithms which are better suited.\", \"Reviewer ARe5 (score of 3) mostly raised an issue of novelty - i.e. there seems to be similar work in this area called \\\"DeepELA\\\", but the authors replied with multiple comparisons to DeepELA, stating that:\", \"DeepELA is for profiling optimization landscapes and not necessarily used by algorithms themselves, while NeuralELA is much more suitable for conditioning algorithms. This means all the downstream mechanics are also different (training objective, training method, etc.)\", \"NeuralELA supports much higher dimensional problems (500+), while DeepELA supports at most 50.\"]}",
"{\"title\": \"Response to Reviewer #C3y6 (2/2)\", \"comment\": \"**[Q1, minimal MetaBBO tasks for good zero-shot performance]**\", \"the_fundamental_principle_of_the_minimal_number_of_metabbo_tasks_for_good_zero_shot_performance_is_that\": \"it should encompass main operating scenarios of existing MetaBBO methods: dynamic algorithm configuration, dynamic operator selection, and dynamic algorithm selection. This is the reason why we choose three MetaBBO tasks as the minimal training tasks in NeurELA. As MetaBBO continues to evolve, we anticipate that this minimal requirement may increase to accommodate newly proposed operating scenarios. However, the training framework of NeurELA does not need to change. Instead, we simply need to augment the training tasks and perform flexible re-training to adapt to the expanded requirements.\\n\\n**[Q2, insights of the learned features]**\\n\\nFollowing your valuable suggestion, we have added an additional experimental analysis to further explore the relationship between NeurELA features and traditional ELA features. Specifically, we use the Pearson Correlation analysis to quantify the correlation between each NeurELA feature and each traditional ELA feature. The experimental methodology, results, and corresponding discussion have been updated in the revised Appendix B.3, along with Figure 4. 
From the correlation results presented in the figure, we observe some notable relationship patterns between our NeurELA features and the traditional ELA features: \\n\\na) Four NeurELA features (F1, F4, F8, and F16) are novel features learned by NeurELA, exhibiting weak correlation (< 0.6) with all traditional ELA features.\\n\\nb) Some NeurELA features strongly correlate with a specific feature group in traditional ELA, such as F3, which aligns closely with the Meta-model group.\\n\\nc) Some other NeurELA features strongly correlate with multiple traditional ELA feature groups, such as F10, which is highly correlated with both the Distribution and Meta-model groups.\\n\\nd) All NeurELA features show weak correlation with the Convexity and Local Landscape groups, suggesting these groups are less relevant for addressing MetaBBO tasks. \\n\\nWe appreciate the reviewer for this valuable suggestion, which significantly helps improve the interpretability of NeurELA. We hope the above results and discussion could address your concern. Note: we have also added some text content into the revised paper (line 465-468, colored in blue) to guide readers to check this interpretation analysis.\\n\\n**[Typos]**\\n\\nWe sincerely apologize for our oversight during proofreading and have corrected the typos you mentioned. Additionally, we have conducted a thorough and systematic review of all text content, figures, and tables to ensure accuracy and clarity.\"}",
"{\"comment\": \"It is an honor for us. Thanks again for your precious time and efforts!\"}",
"{\"comment\": \"Thank you for your detailed answer. I am satisfied with your explanations and will maintain my score.\"}",
"{\"title\": \"Response to Reviewer #PkFr\", \"comment\": \"We appreciate the reviewer for the valuable comments. We also thank you for acknowledging NeurELA as a novel and interesting work, with superior performance, convincing ablations and positive code sharing. We provide the following point-to-point responses to address your remaining concerns.\\n\\n**[W1 & W2, presentation & writing]**\\n\\nFollowing your valuable suggestion, we have carefully checked and refined the typos in the revised paper, including the text content, figures and tables. We also agree with the reviewer that a more detailed description of the workflow in the previous section would enhance the understanding of the readers; hence, we have added some text content in the beginning of the introduction of the revised paper (line 039-046, colored in blue) to this end.\\n\\n**[W3, meaning of \\u201cepoch\\u201d in Figure 4]**\\n\\nWe would like to clarify that the \\u201cZero-Shot\\u201d mode of NeurELA refers to integrating the pre-trained $\\\\Lambda_\\\\theta$ into the neural network group of a given MetaBBO method to substitute its original feature extraction mechanism. The neural network of the MetaBBO methods still requires the meta-learning process to learn a useful policy on the training problem set, while $\\\\Lambda_\\\\theta$ is frozen. In contrast, for the \\u201cFine-tuning\\u201d mode, $\\\\Lambda_\\\\theta$ is activated and co-trained as part of the meta-learning process. Hence, we can plot the two performance gain curves along with the training epochs.\\n\\n**[W4, computational overhead]** \\n\\nWe argue that, as shown in the top part of Table 1, the computational overhead of NeurELA is at a similar level to the original MetaBBO baselines across different problem dimensions. Moreover, as shown in the bottom part of Table 1, when the number of sample points increases, NeurELA consumes significantly less time to compute the features than the original MetaBBO baselines. 
We kindly invite the reviewer to examine these results. Considering the consistent performance improvements demonstrated in Figure 3, along with the comparable feature computation wall time, we believe this instead underscores the contribution of our NeurELA.\"}",
"{\"title\": \"Response to Reviewer #ARe5 (part 1/2)\", \"comment\": \"We appreciate the reviewer for recognizing our paper as well-structured and of good quality. Below, we provide point-by-point responses to address your concerns.\\n\\n**[W1, differences with Deep-ELA]**\\n\\nThank you for raising questions regarding the distinctions between NeurELA and Deep-ELA. Although we have included **a discussion on this issue in Section 2 (lines 187\\u2013194)**, we are happy to expand and clarify the distinctions here.\\n\\n1. **Target Scenario:** NeurELA is explicitly designed for MetaBBO tasks, where dynamic optimization status is critical for providing timely and accurate decision-making at the meta level. In contrast, Deep-ELA serves as a static profiling tool for global optimization problem properties and is not tailored for dynamic scenarios. NeurELA supports dynamic algorithm configuration, algorithm selection, and operator selection. In contrast, Deep-ELA\\u2019s features are restricted to static algorithm selection and configuration, limiting its adaptability in dynamic MetaBBO workflows.\\n2. **Feature Extraction Workflow**: Considering the feature extraction workflow, NeurELA is distinguished from Deep-ELA in two key aspects: \\n 1. First, **NeurELA addresses the limited scalability of Deep-ELA for high-dimensional problems**. Concretely, the embedding in Deep-ELA is dependent on the problem dimension and hence the authors of Deep-ELA pre-defined a maximum dimension (50 in the original paper). To address this, NeurELA proposes a novel embedding strategy which re-organizes the sample points and their objective values {Xs, Ys} so that the last dimension of the input tensor is 2 (Section 3.2, line 267-291). This embedding format has a significant advantage: the neural network of NeurELA is hence capable of processing problems of any dimension and any number of sample points.\\n 2. 
Second, **NeurELA enhances the information extraction through its two-stage attention-based neural network**. Specifically, when processing the embedded data, Deep-ELA leverages its self-attention layers for information sharing across sample points only. In contrast, NeurELA incorporates a two-stage attention mechanism, enabling the neural network to first extract comprehensive and useful features across sample points (cross-solution attention, lines 296\\u2013297) and then across problem dimensions (cross-dimension attention, lines 298\\u2013299). This design helps mitigate computational bias and improve feature representation.\\n3. **Training Method:** The training objective and training methodology in NeurELA and Deep-ELA are fundamentally different. Deep-ELA aims to learn a neural network that could serve as an alternative to traditional ELA. Its training objective is to minimize the contrastive loss (InfoNCE) between the outputs of its two prediction heads (termed the student head and teacher head) by gradient descent, in order to achieve invariance across different landscape augmentations of the same problem instance. In contrast, the training objective of NeurELA is to learn a neural network that could provide dynamic landscape features for various MetaBBO tasks. Specifically, its objective is to maximize the expected relative performance improvement when integrated into different MetaBBO methods. Since such relative performance improvement is not differentiable, NeurELA employs neuroevolution as its training methodology. Neuroevolution is recognized as an effective alternative to gradient descent, offering robust global optimization capabilities.\\n\\nIn summary, NeurELA and Deep-ELA are significantly different works, with distinct target operating scenarios, algorithm design tasks, neural network designs and workflows, as well as training methodologies. We hope the above detailed explanation would address your concern. 
We have added this discussion to the revised Appendix B.2 (colored in blue). To guide readers to this discussion, we have also updated the text in Section 2, lines 187-193 (colored in blue) of the revised paper.\\n\\n[**W2 & Q1, comparison with Deep-ELA**]\\n\\nFollowing your suggestion, we have added Deep-ELA as a baseline in our experiments. Specifically, we utilized the open-sourced Deep-ELA model (large_50d_v1, https://github.com/mvseiler/deep_ela/tree/main/deep_ela/models), and the testing followed the same procedure used for NeurELA and other baselines. We have updated the results in Figure 3 of the revised paper. Overall, our NeurELA consistently outperforms Deep-ELA and demonstrates substantial and reliable performance improvements across the tested MetaBBO methods.\"}",
"{\"comment\": \"This is just a comment. Perhaps the current paper title, \\\"Neural Exploratory Landscape Analysis,\\\" might be confusing because the target application of the proposed method is not clear. To clarify the difference with the existing Deep-ELA in the title, a more explicit title might be appropriate, such as \\\"Neural Exploratory Landscape Analysis for Meta-Black-Box Optimization.\\\"\"}",
"{\"title\": \"Response to Reviewer #C3y6 (part 1/2)\", \"comment\": \"We sincerely appreciate the reviewer for acknowledging our NeurELA as a well-motivated approach with rigorous, clear and comprehensive experimental analysis. We have carefully provided following point-to-point responses to clear your remaining concerns.\\n\\n**[W1, analyse zero-shot failure]**\\n\\nThe zero-shot failure refers to the case where NeurELA is integrated into the MetaBBO method GLEET for the BBOB problem set (as shown on the left side of Figure 4), resulting in a performance score of 0.64, which is below expectations. There are two main reasons which lead to this unexpected result: a) the unique and complex attention-based neural network design in GLEET. b) the relatively simpler landscapes of the tested problem instances in BBOB set. We locate the reasons by further examining the output landscape features of NeurELA for the tested problem instances in BBOB and found that the layer normalization in our design would narrow the feature value range. The narrowed feature values further go through the GLEET\\u2019s attention layers, which are further narrowed by the layer normalization there. This possibly causes the meta-level policy (GLEET\\u2019s actor network, a MLP) confused about the decision bound hence causes the unexpected performance. In contrast, when we integrate NeurELA into GLEET for BBOB-noisy set and Protein-docking set, the zero-shot perforamnce is ideal. This is because the problem instances in these sets have intricate landscapes, with significant differences that remain distinguishable even after being narrowed twice. It is also worth noting that fine-tuning NeurELA with GLEET during its meta-learning process could resolve this problem, which forces the learning of the decision bounds, as shown in the left of Figure 4. 
Furthermore, this fine-tuning process is very efficient since it only consumes 20 training epochs to attain performance similar to the original GLEET baseline. \\n\\n**[W2, training efficiency]**\\n\\nWe would like to clarify that, although a limitation of NeurELA is the training efficiency when using ES for the neuroevolution of larger models, the experimental results in our zero-shot performance (Figure 3) and in our inference efficiency (Table 1) have demonstrated that, with a relatively small model, NeurELA achieves more effective landscape feature extraction with less computational overhead than traditional ELA features and a comparable level of computational overhead to the original designs in existing MetaBBO methods.\\n\\n**[W3, theoretical justification of network design]**\\n\\nWe provide an intuitive explanation of why our neural network can work well here. First of all, if we look at the input and output of the traditional ELA and our NeurELA, they are the same: the input is sample points and their objective values {Xs, Ys} and the output is the landscape features summarized from the input. The difference is the calculation rules. In traditional ELA, the rules are a series of human-crafted principles. In contrast, the rules in NeurELA are learned under the tailored training objective we proposed in Eq. (4). The learned neural network-based rules are inherently more compatible with the target operating scenario: MetaBBO tasks. This is demonstrated by our experimental results (Figure 3).\\n\\nWe also provide a straightforward explanation of the two-stage attention-based network design, which we stated at the beginning of Appendix A.1. There are three motivations: a) generalizability, the neural network should be able to handle optimization problems with different dimensions. b) scalability, the neural network should be sufficiently efficient as the number of sample points scales. 
c) computational completeness, the neural network should involve computation not only across sample points but also across each problem dimension. To address a), we chose an attention-based network, which can process any number of sample points via the attention mechanism. To address b), we chose an attention-based network since the attention mechanism is highly parallelizable. To address c), we designed the two-stage attention. The first stage is cross-solution attention, which promotes the feature computation among the sample points. The second stage is the cross-dimension attention, which further promotes the information sharing among different problem dimensions. By doing so, the neural network parameters (the attention query, key, value weights) are trained to extract useful features of the optimization status information.\\n\\nWe hope this explanation addresses your concerns and clarifies the design and functionality of NeurELA.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"We appreciate your suggestion! We agree that the suggested title might be more appropriate and would adopt it if we could change our title in the final version of the paper. Thanks for such valuable suggestion!\"}",
"{\"title\": \"Request for further feedback\", \"comment\": \"Dear reviewer #ARe5:\\n\\nSince the discussion period is extended, we respectifully request you to check the experimental restults and discussion we have added following your constructive suggestions. We have given a more comprehensive discussion on the differences between our NeurELA and a recent work DeepELA to demonstrate the novelty of this work. We have shown the superiority of NeurELA to DeepELA on a) MetaBBO tasks, b) high-dimensional optimization scanarios. Furthermore, we have statistically analysed the correlation of NeurELA features and traditional ELA features to enhance interpretability. If you have any further instructions, we are open to them and would cooperate with you to make this paper better.\\n\\nBest regards, the authors\"}",
"{\"comment\": \"I am satisfied by the authors' replies and updated text/supplementary. Thus I increased my score by one point.\\n\\nMy only comment is that the authors should make the meaning of \\\"epoch\\\" and \\\"zero-shot mode\\\" clearer in the text.\"}",
"{\"comment\": \"I appreciate the authors\\u2019 responses; however, they do not fully address my concerns. First, the differences between Deep-ELA and the proposed method appear marginal. Specifically, using common techniques such as adding positional embeddings or cross-attention and applying them to a different problem (while relying on an existing objective function) cannot be considered significantly different. Thus, the novelty of the work remains a major concern. Second, the paper lacks comparisons with other methods beyond traditional ELA, and the current results are not sufficiently convincing. Overall, I believe the paper overstates its novelty and contribution and does not meet the requirements for publication.\"}",
"{\"summary\": \"This paper proposes an automatic construction method of landscape features for meta black-box optimization (MetaBBO). The proposed approach, termed neural exploratory landscape analysis (NeurELA), trains the attention-based neural network extracting landscape features to improve the MetaBBO algorithms. While existing MetaBBO methods rely on human-crafted landscape features, the proposed NeurELA can automate the design process for landscape features in MetaBBO. The experimental result demonstrates that the proposed NeurELA outperforms existing MetaBBO methods on unseen MetaBBO tasks. In addition, the authors show that fine-tuning process can lead to further performance improvement.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The motivation for optimizing the feature extractor in MetaBBO is reasonable.\", \"The technical novelty of this paper is to present automating approach for extracting landscape features in MetaBBO.\", \"The effectiveness of the proposed NeurELA is experimentally demonstrated for several MetaBBO and BBO problems.\"], \"weaknesses\": [\"The proposed formulation seems to be a tri-level optimization problem of training landscape feature extractor, training meta-level policy, and optimizing a target objective function. Therefore, using the proposed NeurELA increases the whole computational cost compared to existing MetaBBO methods.\", \"Training the landscape feature extractor is performed in a neuroevolution manner. It seems hard to scale for a large neural network as the feature extractor. In addition, it is not clear that the current setting, i.e., optimizing 3,296 parameters for 500 evaluations by the evolution strategy, is sufficient for convergence.\"], \"questions\": \"1. I suppose that the computational cost of the proposed approach is larger than the existing MetaBBO because it requires the training of a landscape feature extractor as the outer loop for MetaBBO. 
What is the exact computational time/cost for the proposed method compared to existing MetaBBO methods?\\n1. If the baseline methods can use the same computational budgets, what is the performance gain of the proposed approach? For instance, extra budgets may be used to optimize hyperparameters or to select traditional ELA features in the baseline MetaBBO.\\n1. Training the landscape feature extractor in the current setting seems challenging because it should optimize the high-dimensional parameters by the evolution strategy. Is there any empirical evidence for the convergence of ES as the outer loop optimizer?\\n1. What kind of BBO or MetaBBO algorithms can be used with the proposed NeurELA? The authors might assume the population-based BBO algorithms or evolutionary algorithms. Is it possible to combine the NeurELA with other kinds of BBO methods?\\n1. What is the exact number of dimensions of the NeurELA features in the experiment? What is the impact of the dimensionality of the NeurELA features on the performance?\\n1. I could not find the definition of the performance metric in the experimental evaluation, e.g., in Figure 3. Could you provide a detailed definition of the performance metric in the experimental results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for Reviewer PkFr\", \"comment\": \"We sincerely appreciate your positive feedback on our NeurELA! Thanks for the time and efforts you have contributed to improve our paper.\"}",
"{\"summary\": \"The paper introduces Neural Exploratory Landscape Analysis (NeurELA), a novel framework designed to improve Meta-Black-Box Optimization (MetaBBO) by dynamically profiling optimization landscape features through a two-stage, attention-based neural network. Unlike traditional approaches that rely on human-crafted features, NeurELA learns these features automatically in an end-to-end manner. This is aimed at overcoming the limitations of existing Exploratory Landscape Analysis (ELA) methods, such as computational overhead and reliance on expert knowledge. The authors propose a novel neural-network based landscape analyzer, and propose a two-stage attention mechanism: this architecture facilitates robust feature extraction, capable of generalizing to various MetaBBO algorithms without manual adjustments. The novel framework is tested in three MetaBBO tasks against several baselines and ablations.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The new end-to-end pipeline is (imho) novel and interesting\", \"NeurELA consistently outperforms the baselines\", \"Many ablations are performed and thus we better understand the effectiveness of the approach and its parts\", \"The sharing of the code is appreciated\"], \"weaknesses\": [\"**Weaknesses before author's rebuttal. I believe that most of my comments have been (at least partly) by the new version/reply by the authors.**\", \"The presentation of the paper needs quite some work. Many typos are present and a few paragraphs are hard to read. Overall, the authors need to spend a bit more time in improving the presentation.\", \"I believe that a more detailed description of the MetaBBO tasks would greatly help the reader understand and appreciate the performance of the proposed framework.\", \"In Section 4.2 and Fig. 4 it is not clear to what \\\"one epoch\\\" corresponds to. 
For example, I do not understand how we can compare \\\"ZeroShot\\\" with \\\"Finetune\\\" in the same scale. If I understand correctly, the \\\"ZeroShot\\\" variant just uses the learned $\\\\Lambda_{\\\\theta}$ to select at each generation of the low-level optimizer the appropriate feature vector and configuration. On the contrary, the \\\"Finetune\\\" variant should run the whole meta-learning pipeline. Thus, how can we compare \\\"epochs\\\" of one to the other? We need more explanation here to appreciate the results.\", \"The computational overhead of NeurELA framework is quite big. Also compared to the original variant ($\\\\Lambda_0$) the gains are not big even when a big number of samples ($m$) is used.\"], \"questions\": \"See weaknesses..\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes Neural Exploratory Landscape Analysis (NeurELA), a framework to replace hand-crafted landscape analysis features in Meta Black-Box Optimization (MetaBBO) with a learned neural network approach.\", \"the_key_contributions_are\": [\"making the MetaBBO paradigm entirely end-to-end\", \"A two-stage attention-based neural network architecture for dynamic landscape feature extraction\", \"A multi-task training framework to learn from multiple MetaBBO algorithms simultaneously\", \"Demonstration of zero-shot generalization to unseen algorithms and problem sets\", \"Ability to fine-tune the pre-trained analyzer for specific tasks\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The approach is well-motivated and builds on established MetaBBO literature\", \"The experimental methodology is rigorous with proper baselines and ablations\", \"Results demonstrate clear improvements over traditional ELA features\", \"Comprehensive experiments across multiple MetaBBO algorithms\", \"Testing on both synthetic benchmarks and real-world problems (protein docking)\"], \"weaknesses\": [\"Could better analyze when/why zero-shot generalization fails\", \"The authors acknowledge training efficiency issues with larger models\", \"Lacks theoretical justification for why the two-stage attention architecture works well\"], \"questions\": [\"What is the minimum number of training tasks needed for good zero-shot generalization?\", \"Could you provide more insight into what features the network learns compared to traditional ELA?\", \"### Nitpicking\", \"L110: an universal neural landscape analyser... -> a universal\", \"L176: (Prager et al., 2021b) first proposed... -> wrong citation type, should use \\\\citet instead of \\\\citep\", \"L318: We hence introduce Ray Moritz et al. (2018), an open-source... -> wrong citation type + you don't introduce Ray, you \\\"employ\\\" Ray\", \"L350: functiona... 
-> funtion\", \"L367: Recall that an MetaBBO task is... -> that a MetaBBO\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer #ARe5 (part 2/2)\", \"comment\": \"**[W3, high dimensional optimization scenarios]**\\n\\nWe would like to clarify that NeurELA\\u2019s two-stage attention-based feature extractor is inherently capable of handling optimization problems of any dimensionality. As per your suggestion, we conducted additional experiments to evaluate the zero-shot performance of NeurELA on MetaBBO methods and CoCo BBOB problems with 100 and 500 dimensions. The results are presented in the following tables (Deep-ELA is excluded as it only supports up to 50 dimensions). The results demonstrate that NeurELA effectively boosts the performance of MetaBBO methods on high-dimensional problems by leveraging its generalizable and scalable two-stage attention-based feature extractor. \\n\\nResults on BBOB-100D\\n\\n| | LDE | RLEPSO | RL-DAS | RLPSO | DEDDQN | GLEET | MELBA | GLHF |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Original | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |\\n| ELA | 0.24 | 0.17 | 0.82 | 0.77 | 0.93 | 0.41 | 0.83 | 0.7 |\\n| NeurELA | **1.42** | **1.51** | **1.76** | **1.21** | **2.86** | 0.75 | **2.06** | **1.44** |\\n\\nResults on BBOB-500D\\n\\n| | LDE | RLEPSO | RL-DAS | RLPSO | DEDDQN | GLEET | MELBA | GLHF |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Original | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |\\n| ELA | 0.2 | 0.19 | 0.79 | 0.62 | 0.84 | 0.51 | 0.71 | 0.55 |\\n| NeurELA | **1.37** | **1.14** | **1.37** | **1.05** | **1.76** | **1.09** | **1.48** | **1.09** |\\n\\n**[W4, interpretability and relation with traditional ELA]**\\n\\nWe would like to clarify that in Section 4.3, Figure 5, we have conducted an initial interpretability analysis on what NeurELA has learned compared with the traditional ELA. 
This analysis revealed that for a given problem, our NeurELA can provide a clear decision bound during the optimization dynamics, whereas traditional ELA cannot, since it is developed to describe static optimization problem properties rather than dynamic ones.\\n\\nFollowing your valuable suggestion, we have added an additional experimental analysis to further explore the relationship between NeurELA features and traditional ELA features. Specifically, we use Pearson correlation analysis to quantify the correlation between each NeurELA feature and each traditional ELA feature. The experimental methodology, results, and corresponding discussion have been updated in the revised Appendix B.3, along with Figure 4. From the correlation results presented in the figure, we observe some notable relationship patterns between our NeurELA features and the traditional ELA features: \\n\\na) Four NeurELA features (F1, F4, F8, and F16) are novel features learned by NeurELA, exhibiting weak correlation (< 0.6) with all traditional ELA features.\\n\\nb) Some NeurELA features strongly correlate with a specific feature group in traditional ELA, such as F3, which aligns closely with the Meta-model group.\\n\\nc) Some other NeurELA features strongly correlate with multiple traditional ELA feature groups, such as F10, which is highly correlated with both the Distribution and Meta-model groups.\\n\\nd) All NeurELA features show weak correlation with the Convexity and Local Landscape groups, suggesting these groups are less relevant for addressing MetaBBO tasks. \\n\\nWe appreciate the reviewer for this valuable suggestion, which significantly helps improve the interpretability of NeurELA. We hope the above results and discussion could address your concern. Note: we have also added some text content into the revised paper (lines 465-468, colored in blue) to guide readers to check this interpretation analysis.\\n\\n**[Q2, training done on GPU?]**\\n\\nWe apologize for the typo. 
The statement refers to the two-stage attention mechanism being well supported for parallelization on CPUs by PyTorch. We have updated it in the revised paper. \\n\\n**[Q3, wall time comparison]**\\n\\nFirst, the wall time comparison focuses solely on the feature computation time for NeurELA, traditional ELA, and the original design, which is a subcomponent of the entire MetaBBO workflow, to ensure a fair and accurate comparison. Additionally, we kindly direct the reviewer\\u2019s attention to the upper part of Table 1, where we have already included results for 1000-dimensional problems\\u2014a very large scale. In this case, the computation time for traditional ELA is no longer in milliseconds but approximately 300 seconds. This result clearly demonstrates the superior computational efficiency of NeurELA compared to traditional ELA, even in high-dimensional settings.\\n\\nFinally, we sincerely thank the reviewer for all valuable comments and suggestions, which have significantly helped us improve the paper quality. We hope the above responses could clear your concerns and look forward to your positive feedback in the remaining time of the rebuttal phase.\"}",
"{\"comment\": \"We sincerely appreciate your positive feedback on our NeurELA! Thanks for the time and efforts you have contributed to improve our paper.\"}",
"{\"title\": \"Response to Reviewer #RDeR\", \"comment\": \"We appreciate the reviewer for acknowledging our NeurELA as a novel landscape analysis framework with reasonable motivation and effective performance. For your remaining concerns, we provide following point-to-point responses to address them.\\n\\n**[W1 & Q1 & Q2, increased computational cost]**\\n\\nWe would clarify that although certain computational cost is required to train NeurELA (through neuroevolution on multiple MetaBBO methods and downstream BBO problems), we have demonstrated in the experiments (Figure 3, zero-shot performance) that the trained NeurELA can be seamlessly integrated into existing MetaBBO methods to provide effective dynamic landscape analysis, without further re-training. That is, NeurELA can be regarded as a feature extractor exactly the same as the traditional ELA. We also provide the inference wall time comparison in Table 1 to compare the computational cost required to obtain the landscape feature by our NeurELA and traditional ELA, where the results show that NeurELA require less processing time than traditional ELA, particularly for the high-dimensional problem, this is due to the attention-based neural network which facilitates highly paralleled computation. We believe this explanation could clear your concern in Q1 and Q2.\\n\\n**[W2 & Q3, training convergence]**\\n\\nWe have included the training convergence curves of various ES baselines in the revised Appendix, as shown in Figure 3. We kindly request the reviewer to review the results in the newly added Appendix B.1 (colorred in blue), which show that under our setting, the Fast CMAES adopted for training NeurELA converges and achieves superior training effectiveness to other ES baselines. 
\\n\\n**[Q4, usage scope of NeurELA]**\\n\\nWe would like to clarify that NeurELA mainly focuses on the population-based BBO paradigm since the two-stage attention-based feature extractor we proposed is designed to process the information of a collection of sample points by first promoting the information sharing among all sample points and then promoting the relationship detection across dimensions. By doing this, we can provide accurate and dynamic landscape features for the subsequent MetaBBO tasks. We believe NeurELA could be integrated into some human-crafted population-based BBO methods, which require landscape features for dynamic algorithm configuration or operator selection. This is supported by our interpretability analysis in Section 4.3, Figure 5. We can observe the clear decision bound of our NeurELA features when profiling the dynamic optimization status. This clear decision bound could serve as useful features for those human-crafted population-based BBO methods. \\n\\n**[Q5, dimension of NeurELA features]**\\n\\nThe dimension of NeurELA features is 16, which we have stated in Section 4, lines 347-348. We have discussed the dimensionality of NeurELA features in the model complexity discussion part (Section 4.4, lines 503-517), where we compare the feature dimensions 16, 64 and 128. Due to the limitations of the ES baseline in effectively searching for the optimal NeurELA neural network, increasing the feature dimension could not lead to performance improvement.\\n\\n**[Q6, performance metric definition]**\\n\\nWe would like to clarify that we have provided the performance metric definition at the beginning of Section 4.1, lines 373-377. It is exactly the exponential of the relative performance we have designed in Eq. (3). Under this setting, the performance of the original MetaBBO method is always 1. 
For our NeurELA and traditional ELA, a performance value larger than 1 indicates that substituting the original feature extraction design by NeurELA or traditional ELA could improve the optimization performance, and vice versa.\"}",
"{\"summary\": \"The paper introduces Neural Exploratory Landscape Analysis (NeurELA), a framework for Meta-Black-Box Optimization (MetaBBO) that replaces traditional, human-crafted Exploratory Landscape Analysis (ELA) features with a fully neural network-based approach. NeurELA employs a two-stage, attention-based neural network trained via a neuroevolution to dynamically profile optimization landscapes, adapting its features in real time. The authors show that NeurELA enhances performance across various MetaBBO algorithms and generalizes effectively to unseen tasks for zero-shot generalization and fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. Quality: The paper validates the proposed method in detail by answering research questions, not only on the method's performance but also its adaptability, computational efficiency, and generalization capacity.\\n\\n2. Clarity: The paper is well-structured, with clear explanations of NeurELA\\u2019s architecture, training, and integration within MetaBBO tasks.\", \"weaknesses\": \"1. Originality: The proposed work is very similar to Seiler et al., 2024 (Deep-ELA), which also uses multi-head attention as the main component in the architecture. The only difference seems to be that Deep-ELA uses kNN embedding, while the proposed method uses a linear transformation to encode the population information, which is widely used in LLMs to generate embedding from tokens.\\n\\n2. Limited comparisons in experiments: The proposed work does not compare to any recent methods, e.g., Deep-ELA. \\n\\n3. Limited tasks: Although NeurELA is tested across a variety of MetaBBO algorithms and optimization problems, the experiments lack a detailed analysis of its performance in higher-dimensional optimization scenarios, where many MetaBBO algorithms struggle.\\n\\n4. 
Interpretability and feature analysis: Although NeurELA shows promise in dynamically adapting landscape features, there is limited discussion on the interpretability of these features in relation to traditional ELA metrics.\", \"questions\": \"1. What's the major difference between the proposed method and Deep-ELA? A more detailed explanation and corresponding experiments and/or ablation studies will help better support the paper's novelty and contribution.\\n\\n2. Line 479: \\\"This is primarily owning to the two-stage attention architecture in our NeurELA, which can be efficiently computed on GPUs in parallel.\\\" Was the training done on GPU? It was mentioned in line 356 that the training uses a Slurm CPU cluster. \\n\\n3. The wall time is too short to really tell a difference between algorithms since programming/system processing could have a larger effect if it's in milliseconds. Please consider using a more complicated benchmark or increasing the dimensions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Change of Title\", \"comment\": \"Dear AC #Lg5i:\\n\\nFollowing the suggestion from the meta-review and our reviewers, we have changed title of this paper from \\\"Neural Exploratory Landscape Analysis\\\" to \\\"Neural Exploratory Landscape Analysis for Meta-Black-Box-Optimization\\\", which further clarify the scope of our paper and could help future readers more. We would like to note that this change would not harm any content consistency with the previous version. We appreciate your contribution on ICLR and particularly on our paper, thanks!\\n\\nBest regards,\\nthe authors\"}",
"{\"comment\": \"Thank you for the response and updated paper. The authors' response solved my concerns. Therefore, I would increase my score.\"}",
"{\"title\": \"Response to Reviewer #ARe5\", \"comment\": \"We appreciate the reviewer's timely feedback.\\n\\nFirst, we would clarify that **we have provided a detailed elaboration on differences between our NeurELA and DeepELA** in the last response ( also have added this discussion to Appendix B.2). We briefly summarize these significant novelties here:\\n\\na) **We found that the reviewer overlooked one of the most significant novelty against DeepELA**: NeurELA specializes at providing dynamic landscape feature for MetaBBO tasks. In contrast, DeepELA specializes at providing static optimization problem properties for other algorithm analysis and design tasks. This is demonstrated by adding DeepELA as a new baseline and compare it with NeurELA on MetaBBO tasks (following your suggestion). The experimental results are presented in Section 4.1, Figure 3. \\n\\nb) **NeurELA addresses the scalability of DeepELA on high-dimensional problems** through the novel dimension-independent embedding design and the specific two-stage attention mechanism. The latter also helps fine-grained feature extraction across not only sampled points but also the dimensions within them. The experimental results of NeurELA on high-dimensional problem are provide in the last response (following your suggestion), where NeurELA outperforms the baselines in 100D and 500D problems. DeepELA can not handle such cases since its neural network structure is allowed to process maximum 50D problems.\\n\\nc) **NeurELA proposes a novel training objective (Section 3.1, lines 253-259, Equation (4)) and training method** **(Section 3.3, lines 308-328)** to train the proposed network strcture. Under this training objective, pre-trained NeurELA provide more useful optimization status features than baselines (demonstrated by Section 4.1, Figure 3), while comsuming less computational time (demonstrated by Section 4.4, Table 1), especially on high-dimensional and more sample points cases. 
\\n\\nSecond, **we respectfully request specific instructions from the reviewer**: a) what methods should we add to compare beyond traditional ELA? b) which parts of the results are not convincing? We hope for these specific instructions so that we can further address your concerns and improve our paper.\\n\\nLastly, thanks for your time and efforts; we look forward to your further feedback.\"}"
]
} |
EE2tIwKhSW | Real-World Benchmarks Make Membership Inference Attacks Fail on Diffusion Models | [
"Chumeng Liang",
"Jiaxuan You"
] | Membership inference attacks (MIAs) on diffusion models have emerged as potential evidence of unauthorized data usage in training pre-trained diffusion models. These attacks aim to detect the presence of specific images in training datasets of diffusion models. Our study delves into the evaluation of state-of-the-art MIAs on diffusion models and reveals critical flaws and overly optimistic performance estimates in existing MIA evaluation. We introduce CopyMark, a more realistic MIA benchmark that distinguishes itself through the support for pre-trained diffusion models, unbiased datasets, and fair evaluation pipelines. Through extensive experiments, we demonstrate that the effectiveness of current MIA methods significantly degrades under these more practical conditions. Based on our results, we alert that MIA, in its current state, is not a reliable approach for identifying unauthorized data usage in pre-trained diffusion models. To the best of our knowledge, we are the first to discover the performance overestimation of MIAs on diffusion models and present a unified benchmark for more realistic evaluation. | [
"Membership inference attack",
"Diffusion models",
"Benchmark"
] | Reject | https://openreview.net/pdf?id=EE2tIwKhSW | https://openreview.net/forum?id=EE2tIwKhSW | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yxp98XsNjl",
"xu2YVJo6sp",
"xHHQF3ESqP",
"pQHsOhDoWL",
"nW8Vezt67r",
"n0CNfzoh5p",
"f77y8oYYSw",
"bLdOkh4j6W",
"aUzogFAjub",
"WfJNguhyyr",
"WdI8EL6tIr",
"JVokymrz0h",
"In9DsgBw4k",
"GrS6IpNEZ8",
"AhQCr9kMjk",
"6KZUbQXF24",
"2pEX6rR4Gu",
"1YWJFXpqHW"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1730734048571,
1732815645890,
1732216840858,
1732216623554,
1732216528328,
1732949243001,
1733159302316,
1733159150568,
1732216702516,
1732780354964,
1737523852655,
1732216440601,
1734068903738,
1730236118768,
1733158689301,
1730695170910,
1730633852717,
1732216694603
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7638/Reviewer_o9bC"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Reviewer_o9bC"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Area_Chair_DbKD"
],
[
"ICLR.cc/2025/Conference/Submission7638/Reviewer_y1k9"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
],
[
"ICLR.cc/2025/Conference/Submission7638/Reviewer_eNEy"
],
[
"ICLR.cc/2025/Conference/Submission7638/Reviewer_iXFF"
],
[
"ICLR.cc/2025/Conference/Submission7638/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces a straightforward yet powerful benchmark to assess the performance of existing Membership Inference Attacks (MIA) on pre-trained diffusion models within the context of data authorization. The authors identified \\\"overtraining\\\" and \\\"dataset shifts\\\" as two significant limitations of current MIA methods. To address these issues, they developed a benchmark featuring five experimental setups.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The writing is clear\", \"The structure is easy to follow\", \"The paper considered comprehensive comparison with the relate works\"], \"weaknesses\": [\"I am unsure about the input for the membership inference attacks. In Lines 113-116, does x refer solely to the image, or is it a combination of the image and its prompt? I recommend that the authors clarify this in the problem setup.\", \"In Table 1, why does \\\"LDM + CelebA\\\" have $\\\\times$ for both \\u201cOver-training\\u201d and \\u201cShifted Datasets,\\u201d while in the bottom table, \\\"LDM + CelebA\\\" (i.e., the third row) has $\\\\checkmark$ for both? Is this a typo, or have I misunderstood the notation?\", \"While I appreciate the authors\\u2019 efforts in benchmarking MIA methods in practical scenarios, I believe the paper\\u2019s analysis of the two challenges, \\u201cOver-training\\u201d and \\u201cShifted Datasets,\\u201d could be more in-depth. For example, I recommend adding an analysis of how shifted datasets impact MIA performance based on the distance of non-members from the target data (e.g., considering extremely close, moderately distant, and far distant non-members).\"], \"questions\": \"See Weakness part.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for raising the score\", \"comment\": \"We are sincerely grateful for your providing constructive advice on our paper and considering our rebuttal. The updated analysis does complement our discussion on how dataset shifts impact the evaluation of diffusion MIA. Thank you!\"}",
"{\"comment\": \"We thank the reviewer for the insightful review and would like to address the issues point by point:\\n\\n[W1 & Q1] (difference from overfitting & our contribution) We agree that over-fitting and distribution shifts have been discussed by previous or concurrent works. However, there are differences between over-training of diffusion models and traditional over-fitting. Over-fitting in traditional models could be measured by train-test gaps [C] and is generally considered a bad phenomenon. However, as discussed in L155-L158, over-training (for hundreds of epochs) is necessary to achieve the best FID for small diffusion models. For example, the default number of training epochs of DDPM on CIFAR-10 is 2048 (=800k*128 / 50k). Using these default small models is the reason why current MIA evaluation tends to suffer from over-training. However, as shown by our experiments, MIA succeeding on over-trained models may have nothing to do with large-scale models without over-training, e.g. Stable Diffusion, while the latter constitutes the real-world concern of MIA. This dilemma is largely unique to diffusion models. We have mentioned this in our updated new draft in Section 3.\\n\\nFurthermore, the contribution of this work lies more in rectifying current academic practice: we aim to stop the wrong trend of depending on flawed evaluation. As mentioned by the reviewer, there are many works discussing over-fitting and distribution shifts in MIA of other models. However, new research in diffusion MIA seems to neglect these discussions and continues to use evaluation with over-training and dataset shifts. For example, one of our concurrent works (link: https://openreview.net/forum?id=LRSspInlN5) still uses DDPM+CIFAR10, DDPM+CIFAR100, and our setup (b) (flawed) as benchmarks. Hence, we believe it is significant and timely to explicitly flag the flaws of current MIA evaluation and call for an end to depending on it. 
In addition, we provide three prepared and plug-and-play benchmarks, our setups (c), (d), and (e), for realistic MIA evaluation. We do this by searching for all possible large-scale diffusion models with accessible and no-shift members and non-members. This helps future diffusion MIA research adapt swiftly to the real-world setup. We believe both of these contributions are non-trivial.\\n\\n[W2] (possible adjustments to improve MIA) To provide insights for future adjustments, we have conducted new experiments to demonstrate the correlation between quantified dataset shifts and MIA performance, which validates that with dataset shifts alone we can make MIA fail (Section 5.3). Also, we have updated the analysis of the reason for MIA failure and given brief insights on potential improvements (Appendix A.3.1). To briefly summarize the idea: because we only train one step on one specific noise and time step, we cannot distinguish members based on losses with randomly sampled time steps and noises. Instead, we may need to locate the exact time step and noise that the model used for THAT training step.\\n\\nHowever, as discussed above, our main goal is to alert the community to the fatal flaw in current diffusion MIA evaluation and provide a realistic benchmark. Designing novel methods is then outside the scope of our primary focus. Hence, we would like to leave this for future work.\\n\\n[W3 & Q2] (new baselines) We thank the reviewer for the advice and apologize for missing these baselines. We have included these works in our references and are currently running experiments on them. However, it is notable that likelihood-based methods [A] perform similarly to loss-based methods and are not considered as baselines in recent MIA research. Additionally, [B] is a quantile-regression version of GSA in our baselines, which mainly focuses on improving efficiency. 
Hence, we believe these two baselines will not be exceptions to our conclusion on diffusion MIA\\u2019s failure on real-world benchmarks.\\n\\n[Q3] (typos) We thank the reviewer and have fixed the typo in our updated new draft.\\n\\nAgain, we thank the reviewer for the insightful review. If you have further questions, feel free to contact us.\", \"references\": \"[A] Hu & Pang. Loss and likelihood based membership inference of diffusion models. In International Conference on Information Security. 2023.\\n\\n[B] Tang et al. Membership inference attacks on diffusion models via quantile regression. International Conference on Machine Learning. 2024.\\n\\n[C] Carlini et al. Membership inference attacks from first principles. 2022 IEEE Symposium on Security and Privacy. 2022.\"}",
"{\"comment\": \"We thank the reviewer for the insightful review and would like to address the issues point by point:\\n\\n\\n[1] (Must over-training and dataset shifts occur in previous benchmarks?) We thank the reviewer for raising this meaningful question, for which our first draft does not provide a sound explanation. We would like to make a clarification as follows:\\n\\n\\nFirst, ticks mean that the benchmark does suffer from over-training or dataset shifts, while crosses mean it does not. There is a typo: \\\"LDM + CelebA\\\" should have both dataset shift and over-training in Table 1 (upper).\\n\\n\\nSecond, it is not necessarily the case that an arbitrary benchmark using the model and the dataset in Table 1 (upper) suffers from over-training and dataset shifts. The table is used to show the setups of all existing benchmarks with the two drawbacks. The model and the dataset are considered parts of the shown setups.\\n\\n\\nThird, however, while dataset shifts are easy to avoid (by using half of one dataset to train the model and the other half as non-members), over-training is difficult to avoid for most small diffusion models. As mentioned in L155, training for hundreds of epochs is necessary to achieve the best FID for small datasets and diffusion models. This may be caused by the limited size of the dataset. Hence, most small diffusion models trained on academic datasets, including CIFAR-10, CelebA, and ImageNet, need over-training to converge. If we pick a small model without over-training, the model may not have converged and would thus not be qualified to benchmark MIA. That is why over-training is hard to avoid when using small diffusion models as MIA benchmarks. \\n\\n\\nAs a result, it is necessary to introduce large-scale diffusion models as real-world MIA benchmarks. These models benefit from large training datasets and can converge within only one epoch. 
Hence, MIA methods cannot rely on the memorization caused by over-training to easily distinguish members from non-members on these models. This is the main motivation of our work.\\n\\n\\n[2] (Varied setups) As shown in Table 1, previous benchmarks have included such setups with over-training or dataset shifts. As discussed at L206-L208, these setups are meaningless. MIA benchmarks with dataset shifts can be dominated by image distribution classifiers, which will immediately fail on no-shift benchmarks, while those with over-training turn the problem into over-fitting detection. MIA methods succeeding on these benchmarks do not truly infer membership and cannot be used in real-world scenarios. \\n\\n\\nThe main contribution of our work is that we 1) flag a problematic trend in current MIA research, where methods compete on the above meaningless benchmarks, and 2) provide a practical real-world benchmark for future research to compete on. While we have included two flawed setups (a) and (b) in CopyMark, we believe there is no need to include more, because this is not relevant to our two contributions. \\n\\n[3] (in-depth analysis on dataset shift\\u2019s impact) We thank the reviewer for the advice and have conducted new experiments to quantitatively analyze how dataset shifts impact MIA performance. Basically, our experiments have two parts:\\n\\n\\n- Quantifying dataset shifts (Section 5.1). We calculate three distance metrics between member and non-member datasets in our benchmark: normalized Wasserstein distance (NWD), Fr\\u00e9chet Distance (FD), and Mahalanobis Distance (MD). All distances are calculated based on CLIP-large. Among all setups covered, setups (a) and (b) in our benchmark exhibit much larger distances between members and non-members, for example, FDs of 0.32 and 0.24. This validates our conclusion that there are genuine shifts between their members and non-members. 
Therefore, MIA methods could separate these two datasets according to the semantics rather than the membership, which raises the dataset shift concern in the paper. Setup (d) has medium distances between members and non-members, for example, an FD of 0.12. The distances are much smaller for Setups (c) and (e), for example, FDs < 0.10. This experiment quantitatively demonstrates the existence of dataset shifts and shows potential connections between dataset shifts and MIA performance.\\n\\n- Relation between dataset shifts and MIA performance (Section 5.3). We construct a series of non-member datasets by mixing our two non-member datasets, COCO-val-2017 (with shifts to LAION) and LAION-MI (no shifts), with different proportions. We set the proportions to 100% vs 0%, 75% vs 25%, 50% vs 50%, 25% vs 75%, and 0% vs 100%. We evaluate SecMI and PIA on these setups. The result in Figure 1 shows that there is a positive correlation between the performance of these two MIA methods and the proportion of shifted non-member data. This shows that one can easily manipulate the result of MIA evaluation by only changing non-members, which should be irrelevant to the result, and that current MIA evaluation is therefore unreliable.\\n\\nIf you have further questions, feel free to contact us.\"}",
"{\"title\": \"Updating a new draft of the paper\", \"comment\": \"We thank all reviewers for the insightful reviews and have accordingly updated a new draft of our paper. We list the updated points, noted as blue text in our new draft, as follows:\\n\\n[1] Quantifying the dataset shift of benchmarks in CopyMark and showing the threshold between qualified / unqualified member and non-member datasets (Section 5.1).\\n\\n[2] Relation between different levels of dataset shifts and MIA performance (Section 5.3).\\n\\n[3] Fixing typos in Table 1.\\n\\n[4] Brief guidelines on how to address the challenges identified on existing MIA methods under realistic scenarios (Appendix A.3.1).\\n\\n[5] Updating extra references.\\n\\n[6] Explaining the difference between over-training in diffusion models and traditional over-fitting (Section 3).\"}",
"{\"title\": \"Kindly request for further discussions\", \"comment\": \"Dear Reviewers,\\n\\nWe highly appreciate your constructive reviews and advice. To address these concerns, we have updated our paper draft and conducted an in-depth analysis of the impact of dataset shifts with extensive experiments, which further validates our assertion. As the rebuttal period draws to a close, we sincerely look forward to your further ideas on these concerns and kindly request your engagement in the discussion. We also want to hear your advice on the updated content. We would sincerely appreciate it if you could consider engaging in further discussions.\\n\\nBest regards,\\n\\nAuthors of Submission7638\"}",
"{\"title\": \"Kindly request for further discussions\", \"comment\": \"Dear Reviewer,\\n\\nWe highly appreciate your constructive reviews that raised concerns on the contribution of our paper. As we mentioned in the rebuttal, the main contribution of this paper is to alert the community to a wrong trend in current MIA research on diffusion models. While MIA benchmarks need not be defective, all existing benchmarks suffer from the two defects we presented and are continuously used in evaluating new methods. Hence, we believe it is necessary to point out the defects and provide a new fair benchmark. We believe our work is significant for the practice of MIA research, given the necessity of a fair benchmark. As the discussion phase is about to close, we would sincerely appreciate it if you could engage in the discussion and provide further advice on our paper.\"}",
"{\"title\": \"Kindly request for further discussion\", \"comment\": \"Dear Reviewer,\\n\\nWe highly appreciate your constructive reviews that raised concerns about the lack of discussion. To address this concern, we have added discussions on the following topics. 1) We quantify the impact of dataset shifts on MIA performance. The result shows that by only replacing the non-member dataset, the MIA performance suffers a significant drop. Hence, we need to consider the hardest non-member dataset as the real-world setup. 2) We discuss potential directions for future improvement in the area. We believe these two extra discussions can effectively address your concerns. As the discussion phase is about to close, we would sincerely appreciate it if you could engage in the discussion and provide further advice on our paper.\"}",
"{\"comment\": \"[Q4] (implications to promote future research) To provide insights for future adjustments, we have updated the analysis of the reason for MIA failure and given brief insights on potential improvements (Appendix A.3.1). To briefly summarize the idea: because we only train one step on one specific noise and time step, we cannot distinguish members based on losses with randomly sampled time steps and noises. Instead, we may need to locate the exact time step and noise that the model used for THAT training step.\\n\\nHowever, as discussed above, our main goal is to alert the community to the fatal flaw in current diffusion MIA evaluation and provide a realistic benchmark. Designing novel methods is then outside the scope of our primary focus. Hence, we would like to leave this for future work.\\n\\nAgain, we thank the reviewer for the insightful review. If you have further questions, feel free to contact us.\"}",
"{\"title\": \"Thank you for your reply\", \"comment\": \"Thank you for your reply. The rebuttal effectively addresses my concerns, and I have adjusted my score to 6.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"We thank the reviewer for the insightful review and would like to address the issues point by point:\\n\\n\\n[1] (x refers to image or image+prompt) x solely refers to the image. We do not consider the prompt because 1) prompts could be highly variable and easily modified in training, thus not being reliable conditions for MIA, and 2) none of the baseline methods claims a strong dependency on prompts. If future methods have this strong dependency, we would like to update our benchmark and take prompts into consideration. Also, we thank the reviewer for the advice and have fixed this in the problem definition.\\n\\n\\n[2] (Typo in Table 1) Yes, it is a typo. We apologize for this typo and have fixed it in our new draft. \\\"LDM + CelebA\\\" has both dataset shift and over-training, for it uses two different datasets as members and non-members and trains the model for 500 epochs on the member dataset.\\n\\n\\n[3] (in-depth analysis on dataset shift\\u2019s impact) We thank the reviewer for the advice and have conducted new experiments to quantitatively analyze how dataset shifts impact MIA performance. Basically, our experiments have two parts:\\n\\n- Quantifying dataset shifts (Section 5.1). We calculate three distance metrics between member and non-member datasets in our benchmark: normalized Wasserstein distance (NWD), Fr\\u00e9chet Distance (FD), and Mahalanobis Distance (MD). All distances are calculated based on CLIP-large. Among all setups covered, setups (a) and (b) in our benchmark exhibit much larger distances between members and non-members, for example, FDs of 0.32 and 0.24. This validates our conclusion that there are genuine shifts between their members and non-members. Therefore, MIA methods could separate these two datasets according to the semantics rather than the membership, which raises the dataset shift concern in the paper. Setup (d) has medium distances between members and non-members, for example, an FD of 0.12. 
The distances are much smaller for Setups (c) and (e), for example, FDs < 0.10. This experiment quantitatively demonstrates the existence of dataset shifts and shows potential connections between dataset shifts and MIA performance.\\n\\n- Relation between dataset shifts and MIA performance (Section 5.3). We construct a series of non-member datasets by mixing our two non-member datasets, COCO-val-2017 (with shifts to LAION) and LAION-MI (no shifts), with different proportions. We set the proportions to 100% vs 0%, 75% vs 25%, 50% vs 50%, 25% vs 75%, and 0% vs 100%. We evaluate SecMI and PIA on these setups. The result in Figure 1 shows that there is a positive correlation between the performance of these two MIA methods and the proportion of shifted non-member data. This shows that one can easily manipulate the result of MIA evaluation by only changing non-members, which should be irrelevant to the result, and that current MIA evaluation is therefore unreliable.\\n\\nAgain, we thank the reviewer for the insightful review. If you have further questions, feel free to contact us.\"}",
"{\"metareview\": \"The paper aims to evaluate the state-of-the-art MIAs on diffusion models and reveal critical flaws and overly optimistic performance estimates in existing MIA evaluation. The presentation is clear and the experiments are also good. Reviewers like this paper, but find many issues that cannot be easily fixed in a short time, such as the lack of discussion and the lack of evaluation, and so on.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers like this paper, but find many issues that cannot be easily fixed in a short time, such as the lack of discussion and the lack of evaluation, and so on.\"}",
"{\"summary\": \"This paper investigates the evaluation of state-of-the-art membership inference attacks (MIAs) on diffusion models in real-world scenarios. Specifically, it highlights flaws in current MIA evaluations, where over-training and dataset shifts lead to overestimated performance of the membership detection. To address this, the paper introduces a unified benchmark for MIAs on diffusion models, named CopyMark, which is built without over-training, using non-shifted datasets and blind testing. The experiments cover the recent loss-based MIA methods and classifier-based MIA methods, conducted on both defective setups and real-world setups. The results reveal that existing MIAs perform poorly on diffusion models in realistic scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow. It explains the flaws in existing MIA evaluations, i.e., over-training and dataset shifts, and is structured to understand these two problems through quantitative and qualitative analyses.\\n\\n2. This paper makes valuable thoughts about the limitations of current MIA evaluations on diffusion models. The significance of the proposed realistic evaluation for MIA is substantial, particularly in the context of AI copyright lawsuits and data privacy.\", \"weaknesses\": \"1. The originality of these two flaws, i.e., over-training and dataset shifts, remains a concern. Similar concepts like over-fitting and distribution shifts have been discussed in previous works (Carlini et al., 2022; Maini et al., 2024) on traditional deep learning models and large language models. This paper may potentially adapt the MIA setting to diffusion models while providing more assessments.\\n\\n2. Although the paper assesses existing MIA methods on diffusion models, it does not explore possible adjustments to improve MIA performance on CopyMark. 
For example, how to address the challenges identified on existing loss-based and classifier-based MIA methods and how to achieve better results under realistic scenarios.\\n\\n3. The evaluation may lack comprehensiveness as a benchmark, as the experiments are limited to loss-based and classifier-based MIA methods on diffusion models. Other types of MIAs, such as likelihood-based MIAs (Hu & Pang, 2023) and MIAs using Quantile Regression (Tang et al., 2024), are not included.\\n\\nCarlini et al. Membership inference attacks from first principles. 2022 IEEE Symposium on Security and Privacy. 2022.\\n\\nMaini et al. LLM Dataset Inference: Did you train on my dataset? arXiv preprint arXiv:2406.06443. 2024.\\n\\nHu & Pang. Loss and likelihood based membership inference of diffusion models. In International Conference on Information Security. 2023.\\n\\nTang et al. Membership inference attacks on diffusion models via quantile regression. International Conference on Machine Learning. 2024.\", \"questions\": \"1. How do the issues of over-training and dataset shifts differ between diffusion models and traditional deep learning models or large language models? Will the proposed realistic scenarios similarly reduce MIA effectiveness on these other model types?\\n\\n2. How do MIA methods based on likelihood and quantile regression perform on diffusion models in the proposed realistic scenarios? Will their performance also see a significant reduction?\\n\\n3. Minor point: a typo \\u201crandomlyy\\u201d in the third paragraph of section 4.3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Kindly request for further discussions\", \"comment\": \"Dear Reviewer,\\n\\nWe highly appreciate your constructive reviews that raised concerns on the contribution of our paper. As we mentioned in the rebuttal, the main contribution of this paper is to alert the community to a wrong trend in current MIA research on diffusion models. While MIA benchmarks need not be defective, **all existing benchmarks** suffer from the two defects we presented and are continuously used in evaluating new methods. Hence, we believe it is necessary to point out the defects and provide a new fair benchmark. In addition, we conducted two experiments to quantitatively demonstrate how changing dataset shifts can manipulate the performance of diffusion MIAs. We believe these can effectively address your concerns regarding a deeper understanding of dataset shifts. As the discussion phase is about to close, we would sincerely appreciate it if you could engage in the discussion and provide further advice on our paper.\"}",
"{\"summary\": \"The paper proposed a simple but effective benchmark for evaluating the existing MIA\\u2019s performance on the pre-trained diffusion models for the data authorization problem. The authors first found that \\u201covertraining\\u201d and \\u201cdataset shifts\\u201d are two major defects of the existing MIA methods. Then, to overcome the two challenges, the authors proposed a benchmark that incorporates five different experimental setups, where the last three avoid the dataset-shift problem by using members and non-members from the same distributions, and the over-training problem by only considering pre-trained models trained for 1 epoch.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Presentation is good and easy to follow.\", \"The addressed problem is meaningful.\"], \"weaknesses\": [\"I am confused about the upper part of Table 1. What do the \\u201c\\u2705\\u201d and \\u201c\\u274c\\u201d symbols represent in each entry? Additionally, are \\u201cOver-training\\u201d and \\u201cShifted Datasets\\u201d considered issues in each experimental setup (e.g., is over-training a problem in the DDPM + CIFAR10 setup)? If so, why is over-training necessarily a problem for DDPM + CIFAR10? I believe this only holds when certain factors, like training epochs, are fixed as you reported in the common setting; otherwise, this claim seems overstated.\", \"Could the benchmark allow for more varied experimental setups\\u2014for instance, having no dataset shift but including over-training? A simple example could involve training a DDPM on the CIFAR10 training set and using the CIFAR10 test set as non-members, which would meet the no-shift criterion.\", \"Furthermore, the concept of \\u201cdataset shift\\u201d is somewhat unclear to me. The benchmark assumes there\\u2019s no distribution shift when two datasets come from the same source. 
I suggest the authors delve deeper into this by considering metrics to quantify dataset distance (distribution distance), such as the Wasserstein distance.\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents a novel approach to assessing MIA on diffusion models by introducing a new benchmark called CopyMark. This benchmark aims to provide a realistic and unbiased environment for testing the effectiveness of MIAs against these models. The study underscores the potential overestimation of MIA effectiveness due to biased experimental setups in previous research and argues for a more nuanced understanding and evaluation of MIAs in practical applications. The paper pinpoints that current MIAs on diffusion models are not trustworthy tool to provide evidence for unauthorized data usage in diffusion models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Good and significant topic. The paper identifies a critical gap in the evaluation of MIAs, offering a novel approach to benchmarking that could reshape how these attacks are studied, and providing valuable insights that can influence the future research.\\n2. Comprehensive experiment. The experiments conducted are extensive, providing evidence that challenges the overestimation of MIA effectiveness on diffusion models.\", \"weaknesses\": \"1. Lack of discussion. The discussion on the practical implications of the findings is somewhat superficial and lacks depth in Section 6, particularly in how these results could influence real-world security strategies.\", \"questions\": \"The paper aims to construct a real-world benchmark, pinpointing the current limitation of MIA setups, specifically the unknown distribution of members and non-members in real-world MIAs. It is reasonable that a newly proposed benchmark can cause current methods to yield poor performance. However, I find the discussion lacking in adequately demonstrating how this benchmark accurately reflects real-world settings from my perspective. 
In my opinion, additional evidence and a more thorough discussion would strengthen this aspect.\\n\\nIn the evaluation setup part, the paper mentions that (d) has a slight data shift but is more minor than other settings. Can you provide further insight into how minor dataset shifts were quantified and their potential impact on the validity of MIA results? It would be beneficial to have a more detailed analysis of how significant these shifts need to be in impacting the effectiveness of MIAs. What thresholds for dataset similarity were considered, and how were they determined?\\n\\nThe paper demonstrates that current MIAs are less effective under realistic conditions on diffusion models. How do you envision these findings being applicable to other types of generative models? Are there specific characteristics of diffusion models that may limit the generalizability of the results? A discussion on this could clarify potential broader applications of your findings.\\n\\nThis paper conducts a comprehensive experiment and concludes that the current MIAs on diffusion models do not perform well in real-world scenarios. However, I think the discussion part is relatively superficial and requires a deeper analysis based on the experimental results. Can you provide more implications and extend the discussion to promote future research?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We thank the reviewer for the insightful review and would like to address the issues point by point:\\n\\n\\n[Q1] (Why our benchmark is realistic) Our benchmark is more realistic than existing benchmarks because we use models without over-training and non-member datasets with minor or no shifts relative to the member datasets. We would like to summarize the reasons as follows:\\n\\nDataset shifts: Membership inference is supposed to depend only on membership. If we use a cat dataset to train a model and use a dog dataset as non-members for evaluation, an image classifier can easily separate these two datasets. However, this is not membership inference, and it will immediately fail when we use another hold-out cat dataset as non-members. According to our new experiments in Section 5.1, our datasets do have smaller dataset shifts. Also, Section 5.3 shows that by only tuning the dataset shifts, we can manipulate the evaluation result. This means that current evaluation with big dataset shifts is not fair and realistic.\\n\\nOver-training: Over-training diffusion models for hundreds of epochs on small datasets makes the loss of members conspicuously lower than that of non-members. However, real-world security and privacy scenarios mostly involve large-scale diffusion models that are trained for only one epoch on the training dataset. MIA succeeding on over-trained models may have nothing to do with these models in real-world applications. Our benchmark uses models pre-trained for only one epoch, which is the minimum number of training epochs, thus getting rid of over-training.\\n\\n\\n[Q2 & W1] (in-depth analysis of the dataset shift\\u2019s impact) We thank the reviewer for the advice and have conducted new experiments to quantitatively analyze how dataset shifts impact MIA performance. Basically, our experiments have two parts:\\n\\n\\n- Quantifying dataset shifts (Section 5.1). 
We calculate three distance metrics between member and non-member datasets in our benchmark: normalized Wasserstein distance (NWD), Fr\\u00e9chet Distance (FD), and Mahalanobis Distance (MD). All distances are calculated based on CLIP-large. Among all setups covered, setups (a) and (b) in our benchmark exhibit much larger distances between members and non-members, for example, FDs of 0.32 and 0.24. This validates our conclusion that there are valid shifts between their members and non-members. Therefore, MIA methods could separate these two datasets according to the semantics rather than the membership, which raises the dataset shift concern in the paper. Setup (d) has medium distances between members and non-members, for example, an FD of 0.12. The distances are much smaller for setups (c) and (e), for example, FDs < 0.10. This experiment quantitatively demonstrates the existence of dataset shifts and shows potential connections between dataset shifts and MIA performance.\\n\\n- Relation between dataset shifts and MIA performance (Section 5.3). We construct a series of non-member datasets by mixing our two non-member datasets: COCO-val-2017 (with shifts to LAION) and LAION-MI (no shifts) in different proportions. We set the proportions to 100% vs 0%, 75% vs 25%, 50% vs 50%, 25% vs 75%, and 0% vs 100%. We evaluate SecMI and PIA on these setups. The result in Figure 1 shows that there is a positive correlation between the performance of these two MIA methods and the proportion of shifted non-member data. This shows that one can easily manipulate the result of MIA evaluation by only changing the non-members, which are supposed to be irrelevant to the result, and that current MIA evaluation is therefore unreliable.\\n\\n\\n[Q3] (generalization of our conclusion) We notice that there are existing works discussing the failure of MIA on other generative models, e.g. LLMs. However, MIA of LLMs seems to never fall into the hallucination of success as that of diffusion models does. 
This is because over-training is the default setup of training small diffusion models to achieve the best FID (L155-158 in our new draft). In other words, there is an intrinsic gap between MIA of small diffusion models and that of large-scale diffusion models in the real-world application. We believe this is unique for diffusion models. The main contribution of this work is also to rectify current academic practice in diffusion MIA, to stop the wrong trend of depending on evaluation pipelines far from real-world applications, and to provide a sound and realistic benchmark.\"}"
]
} |
EDoD3DgivF | On Linear Representations and Pretraining Data Frequency in Language Models | [
"Jack Merullo",
"Noah A. Smith",
"Sarah Wiegreffe",
"Yanai Elazar"
] | Pretraining data has a direct impact on the behaviors and quality of language models (LMs), but we only understand the most basic principles of this relationship. While most work focuses on pretraining data's effect on downstream task behavior, we investigate its relationship to LM representations. Previous work has discovered that, in language models, some concepts are encoded "linearly" in the representations, but what factors cause these representations to form (or not)? We study the connection between pretraining data frequency and models' linear representations of factual relations (e.g., mapping France to Paris in a capital prediction task). We find evidence that the formation of linear representations is strongly connected to pretraining term frequencies; specifically for subject-relation-object fact triplets, both subject-object co-occurrence frequency and in-context learning accuracy for the relation are highly correlated with linear representations. This is the case across all phases of pretraining, i.e., it is not affected by the model's underlying capability. In OLMo-7B and GPT-J (6B), we discover that a linear representation consistently (but not exclusively) forms when the subjects and objects within a relation co-occur at least 1k and 2k times, respectively, regardless of when these occurrences happen during pretraining (and around 4k times for OLMo-1B). Finally, we train a regression model on measurements of linear representation quality in fully-trained LMs that can predict how often a term was seen in pretraining. Our model achieves low error even on inputs from a different model with a different pretraining dataset, providing a new method for estimating properties of the otherwise-unknown training data of closed-data models. 
We conclude that the strength of linear representations in LMs contains signal about the models' pretraining corpora that may provide new avenues for controlling and improving model behavior: particularly, manipulating the models' training data to meet specific frequency thresholds. We release our code to support future work. | [
"pretraining data",
"pretraining",
"linear",
"linear features",
"interpretability",
"linear representations",
"corpus frequency"
] | Accept (Poster) | https://openreview.net/pdf?id=EDoD3DgivF | https://openreview.net/forum?id=EDoD3DgivF | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yVWpAxbRGG",
"wsIEkTLmty",
"v0gJ7xq2T7",
"uzJq4iDSLy",
"uUrdUrKOry",
"tAmGsdqDwY",
"ifNDPVkPlg",
"ibwjOiaDON",
"iDdemiopHl",
"gGkmVt1FXz",
"aKSsTjZNT1",
"Y5T089rA91",
"WBXOCMWrsy",
"RXetXmZRSM",
"RBl8RopGsQ",
"KQk1j3RbNh",
"Izr4XRXvUd",
"HWThuj9qcU",
"GdRNFNAszi",
"EyPnkFzdDJ",
"686aqnw25r",
"1mcdk30Yrv"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1732584458984,
1732500356782,
1730717034637,
1732245315216,
1732587285630,
1732586790509,
1730239488043,
1733173686016,
1734762673654,
1732245113950,
1732245046290,
1732245288280,
1730690083707,
1732587083793,
1732661148768,
1730529540967,
1732606881004,
1732245329006,
1732661730563,
1732663967586,
1737524202643,
1732244997344
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_juu2"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_eSj4"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_eSj4"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_juu2"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_juu2"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_juu2"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_juu2"
],
[
"ICLR.cc/2025/Conference/Submission12604/Area_Chair_X45U"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_J8rP"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_juu2"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_zXUa"
],
[
"ICLR.cc/2025/Conference/Submission12604/Reviewer_J8rP"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12604/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"> Being able to predict the frequencies of individual terms as well as the co-occurrence seems to be a direct implication of high correlation and therefore does not sound like a major standalone contribution\\n\\nThanks for the explanation. I agree that \\\"even this simple correlation relationship has not been previously shown in previous work\\\", and I think this main finding of the paper is very interesting, as I said in the strengths. The point I am trying to understand here is, high-correlation seems to imply that you can \\\"predict an output variable based on input variables\\\". Consider a case where two variables are nearly perfectly correlated, isn't it obvious that one can fit a linear model that takes one as input and another as output? I understand the other way around is not necessarily true: being able to fit a complicated regression model doesn't imply that two variables are highly correlated, e.g. y=sin(x). Correct me if I misunderstood some of your arguments.\"}",
"{\"comment\": \"Thank you for the rebuttal and the additional details. I find that this is a solid and dedicated contribution to a specific issue, and the related discussion was useful, which increased my contribution score. I do not have a specific set of experiments in my mind for extending the broader impact, but maintain my overall borderline positive standing.\"}",
"{\"summary\": \"The authors investigate the correlation between linear representations and pre-training data frequency in language models. The work builds on recent findings that the linearity of different types of relations varies significantly depending on the specific relationship. Existing work does show that language models exhibit such linear structures, but does not reveal the underlying reason why some relations exhibit such structure while others do not. The main contribution of this work is to empirically draw the correlation between such linear structure and data frequency. It shows that linear representations for factual recall relations are related to mention frequency and model size. In addition, more detailed results show that linear representations form at predictable frequency thresholds during training, which allows the prediction of term frequencies in the training data. Finally, the authors release a tool for searching through tokenized text for understanding training data characteristics.\\n\\nOverall, the findings are insightful for understanding linear representation structures in language models. This empirical study complements existing theoretical evidence on the same subject. It provides a perspective on the problem, which can be among many other factors driving the formation of linear structures. On the utility side, the findings can be used for understanding training data, which are typically not published for current LLMs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"A perspective for understanding the reason that some features from LLMs demonstrate linear structures while others do not.\\n\\nA tool for searching through tokenized text to support the understanding of training data.\", \"weaknesses\": \"It provides one empirical perspective on the problem, with a specific set of metrics. 
While giving useful information, the depth of understanding and the utility domain are constrained mostly to the correlation between term frequency, model size, and the linear structure.\", \"questions\": \"Could there be some theoretical discussion on the training dynamics and the frequency thresholds?\\n\\nOne related work is\\n\\nGuangsheng Bao, Zhiyang Teng, and Yue Zhang. 2023. Token-Level Fitting Issues of Seq2seq Models. In Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP) at ACL 2023. Toronto, Canada, July 9th to July 14th, 2023.\\n\\nwhich also discusses the correlation between term frequencies and model accuracy.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Continued\", \"comment\": \"> \\\"Importantly, this regression model generalizes beyond the specific LM it was trained on without additional supervision.\\\": the prediction results seem to be very noisy and not much better than the mean frequency baseline.\\n\\nWe\\u2019d like to point out that we get negative results for subj-obj co-occurrence prediction, which is a harder task. We revise this in Section 5.3, but it\\u2019s likely that this problem requires more contextual information to get any reasonable performance. Another reason is that the data is highly concentrated around the mean, with a few outliers, which is demonstrated by the high accuracy of predicting the mean in this case.\\n\\n\\n> , the fact that this threshold is predictable regardless of when this frequency threshold is met is not well supported by results. It is necessary to show that the threshold (mean causality >.9) is consistent across different checkpoints.\\n\\nThere seems to be a miscommunication around these results, because that is what our findings demonstrate. Consider the red triangle in the top right corner of the OLMo 7B graph in Figure 2: this is the country-largest-city relation for the checkpoint at only 10k steps. It is highly frequent and has a perfect causality score of 1.0. Considering only the red dots (the 10k step checkpoints), the correlation is clearly still visible. However, with regard to the reviewer\\u2019s other point:\\n\\n> It would be nice if you could arrange results into different scatter plots for each pretraining stage and compare them, say by fitting a linear model and comparing their slopes and biases.\\n\\nThis is good feedback, and we will add this to make our point much clearer. 
The reviewer raised a few points of concern with regards to this figure, and we believe this addresses them.\\n\\n\\n> The results in Figure 3 and Table 1 do not match:\\n> The author should explain how are the numbers related to each other.\\n> Table 1: \\\"Train OLMo\\\" and \\\"Train GPT-J\\\" are hardly self-explanatory, the authors should consider better ways to explain the settings.\\n\\nThere are differences in these settings: \\u2018Train OLMo\\u2019 refers to the setting where we fit the regression on OLMo data and evaluate it on GPT-J data. In this setting we are explicitly testing robustness to the difficult setting of heldout relations and a heldout model. In Figure 3 we are showing the setting of fitting and evaluating on heldout relations on the same model. Note that we must filter the datasets so that there is no overlap between seen examples from relations, and must only consider data that appears in both the PILE and Dolma, thus making the baselines that we are comparing completely different. We can discuss the differences more in the paper/appendix. Please let us know if this is still unclear, but essentially, these baselines aren\\u2019t meant to be compared.\\n\\n\\n>> Table 2: 1) \\\"Predictions are better for relations that are closer to those found in fitting the relation (country-related relations)\\\" What does closer mean here? How did you measure this?\\n\\nHere \\u201ccloser\\u201d is a qualitative term describing the output domain of the relations trained vs. tested on. For example, fitting the regression on relations that output people\\u2019s names vs. constellation names. This is a fair point to raise, we have updated the paper to better explain these terms.\\n\\n>> Are there aggregated numbers of all pairs? How many of them have errors less than 5%?\\n\\nThis is referring to a drop in 5% accuracy from OLMo (70%) to GPT-J (65%), i.e., performance is maintained across this model pair. 
We\\u2019ve updated this to say 5% within-magnitude accuracy.\", \"to_summarize_the_revisions_based_on_these_comments\": \"We improve the explanations for the differences in settings for the regression results and provide clearer breakdowns of the datasets used.\\n\\n\\n>> The experiments follow previous work and only analyze 25 relations. What are the reasons for not including other relations?\\n\\nIn response to this and other reviewer comments, we will include an analysis on commonsense relations as well, which may provide additional insights on how/when linear representations form. We currently have the counts for these, and will report the results on these relations in the upcoming days\\n\\n>> Section 4.3 is interesting to some degree but I am not sure about the implication of the results. Looks like it is just a description of what is observed. What is the research question you want to answer here?\\n\\nThe specific research question is \\u201cHow does accuracy relate to the presence/absence of linear representations of relations?\\u201d. We were surprised that this trend had not been reported anywhere before, and found the relationship to be quite strong, however it is still unclear whether a linear representation causes or is necessary for high performance. The answer to this question has important implications for measuring specific model capabilities, so we wanted to highlight what we found as a starting point for future work.\"}",
"{\"comment\": \"Is it possible to compare the efficiency of the proposed tool to that of WIMBD? Actually this is my original question. Nowhere in the paper is a sliding window or a naive search with np.where mentioned, so I wasn't sure what you mean by efficient.\"}",
"{\"comment\": \"> There seems to be a miscommunication around these results, because that is what our findings demonstrate. Consider the red triangle in the top right corner of the OLMo 7B graph in Figure 2: this is the country-largest-city relation for the checkpoint at only 10k steps. It is highly frequent and has a perfect causality score of 1.0. Considering only the red dots (the 10k step checkpoints), the correlation is clearly still visible, however with regards to the reviewer\u2019s other point: It would be nice if you could arrange results into different scatter plots for each pretraining stage and compare them, say by fitting a linear model and comparing their slopes and biases. This is good feedback, and we will add this to make our point much clearer. The reviewer raised a few points of concern with regards to this figure, and we believe this addresses them.\\n\\nCan you add the figure to the paper or appendix? Currently, Figure 2 and its caption are hard to understand. Specifically: \\\"why are some points darker than the others? What do the lines (light grey and dark grey lines) mean? Also, why do all dots for GPT-J have the same shape while the dots for the 2 left plots do not? What do different shapes mean (should be either explained in the caption or represented in the legend)?\\\"\\n\\n> This is referring to a drop in 5% accuracy from OLMo (70%) to GPT-J (65%), i.e., performance is maintained across this model pair. We\u2019ve updated this to say 5% within-magnitude accuracy.\\n\\nThanks for clarifying, but 70% (Figure 1) and 65% (Table 1) are not mentioned anywhere in this paragraph, and the surrounding mention of Table 2 makes it even more misleading. Consider referring to the numbers more directly.\\n\\n> The specific research question is \u201cHow does accuracy relate to the presence/absence of linear representations of relations?\u201d. 
We were surprised that this trend had not been reported anywhere before, and found the relationship to be quite strong, however it is still unclear whether a linear representation causes or is necessary for high performance. The answer to this question has important implications for measuring specific model capabilities, so we wanted to highlight what we found as a starting point for future work.\\n\\nThe reason I am confused about the section is that existing work ([1] and the ones you cited) has already shown that the in-context learning accuracy on certain tasks has a strong correlation to the pretraining term frequencies. And in this work you show that \\\"the presence/absence of linear representations\\\" is strongly correlated to frequencies as well. Then isn't it already implied that there should be some correlation between the accuracy and \\\"the presence/absence of linear representations\\\"? And an obvious hypothesis for \\\"however it is still unclear whether a linear representation causes or is necessary for high performance\\\" is that the pre-training term frequency is the common cause.\\n\\n[1] Razeghi Y, Logan IV R L, Gardner M, et al. Impact of pretraining term frequencies on few-shot reasoning[J]. arXiv preprint arXiv:2202.07206, 2022.\"}"
"{\"summary\": \"This paper finds that the formation of linear representations for factual recall relations in LMs is highly correlated with the frequency of subject-object co-occurrence in the pretraining data. The formation of linear representation can happen at any stage of pretraining as long as the subj-obj co-occurrence exceeds some threshold, i.e., a linear representation can form consistently when the subjects and objects co-occur at least 1-2k times even at early stages of pretraining. The results also indicate that the frequency threshold is related to the model size, and larger models tend to require smaller thresholds. Using the metrics that evaluate the quality of linear representations, the authors can predict the approximate frequencies of individual terms as well as the co-occurrence of terms in the pretraining corpus better than using the LMs' uncertainty alone.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper draws an interesting connection between pretraining term frequency and the formation of linear representations of factual recall relations. The fact that the formation of linear representations could happen at any pretraining stage is particularly intriguing. The experiments and results are easy to understand and the discussion of related work is comprehensive.\", \"weaknesses\": [\"Being able to predict the frequencies of individual terms as well as the co-occurrence seems to be a direct implication of high correlation and therefore does not sound like a major standalone contribution. 
Also, \\\"Importantly, this regression model generalizes beyond the specific LM it was trained on without additional supervision.\\\": the prediction results seem to be very noisy and not much better than the mean frequency baseline.\", \"Some claims are not properly justified:\", \"Line 75: \\\"This frequency threshold decreases with model size\\\": This is only tested on two model sizes (OLMo-1B and OLMo-7B), and the fact that GPT-J (6B) has a smaller threshold than OLMo-7B is a counterexample for this. It would be good, to be consistent with the rest of the discussion, just to claim there is a connection to scale.\", \"Line 93-94: \\\"Linear representations form at predictable frequency thresholds during training, regardless of when this frequency threshold is met for the nouns in the relation.\\\" The term \\\"predictable\\\" can be understood as there is a strong correlation between the linear representation quality and the co-occurrence frequency. However, the fact that this threshold is predictable regardless of when this frequency threshold is met is not well supported by results. It is necessary to show that the threshold (mean causality >.9) is consistent across different checkpoints.\", \"Line 319-320: \\\"Regardless of pretraining step, models that surpass this threshold have very high causality scores.\\\" It would be nice if you could arrange results into different scatter plots for each pretraining stage and compare them, say by fitting a linear model and comparing their slopes and biases.\", \"Line 100: The efficiency of the proposed searching tool is not well discussed.\", \"Line 455: \\\"Some relations, like star-constellation perform very poorly, possibly due to low frequency\\\" Why is low frequency the cause?\", \"Line 471: \\\"Second, evaluating on the LRE features of a heldout model (scaled by the ratio of total tokens trained between the two models) maintains around the same accuracy,\\\" How do the results support \\\"around the same accuracy\\\"? 
If it is comparing Train OLMo and Train GPT-J in Table 1, the drop in accuracy is larger than the performance gap between LRE features and the mean baseline. I am not sure if this entails \\\"around the same accuracy\\\".\", \"Line 483: \\\"In general, the regression transfers well, without performance deteriorating much (about 5%), suggesting LREs are encoding information in a consistent way across models.\\\" What results support this? Table 2 only shows a few examples, which is insufficient for supporting the claim. Are there aggregated numbers of all pairs? How many of them have errors less than 5%?\", \"Some important details are missing from the experiments:\", \"The results in Figure 3 and Table 1 do not match: 1) why is the mean freq. baseline performance different? 2) why do LRE features (Table 1: 0.76) seem to perform better than LRE + LM (Figure 3: ~0.67) for OLMo, if Figure 3 shows the results for OLMo. The author should explain how are the numbers related to each other.\", \"Table 2: 1) \\\"Predictions are better for relations that are closer to those found in fitting the relation (country-related relations)\\\" What does closer mean here? How did you measure this? 2) \\\"Some relations, like star-constellation perform very poorly, possibly due to low frequency\\\"\", \"Some figures and tables need to be more carefully explained:\", \"The two left plots in Figure 2 need more explanation or should be presented in a better way. Specifically, why are some points darker than the others? What do the lines (light grey and dark grey lines) mean? Also, why do all dots for GPT-J have the same shape while the dots for the 2 left plots do not?\", \"Table 1: \\\"Train OLMo\\\" and \\\"Train GPT-J\\\" are hardly self-explanatory, the authors should consider better ways to explain the settings.\"], \"questions\": \"1. The experiments follow previous work and only analyze 25 relations. What are the reasons for not including other relations?\\n2. 
Section 4.3 is interesting to some degree but I am not sure about the implication of the results. Looks like it is just a description of what is observed. What is the research question you want to answer here?\\n3. Typos:\\n 1. Line 455 the regression model can \\\"be\\\" sensitive to ...\\n 2. Line 416: the paragraph does not end properly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I appreciate the authors' response. Many of my concerns have been addressed. Although I still think many of the discussions and analyses are a bit repetitive (in-context learning accuracy, and predicting how often a term was seen in pretraining), the paper indeed provides interesting insights. Therefore I have increased my score accordingly.\"}",
"{\"metareview\": \"This paper explores the Linear Representation Hypothesis in LLMs. Specifically, it aims to demonstrate a correlation between the cooccurrences of a subject and object and their relationship being represented in a linear way. The paper establishes thresholds in different models beyond which this happens. It then looks to go the other way and predict pre-training frequencies given representations. Adding LRE features improves performance here.\\n\\nThe core question of this paper is very timely and interesting. The paper is clearly written and presents what I would characterize as medium-strong evidence of the correlation between frequency and linearity. The types of representations formed by LLMs and what factors of training cause them to emerge is a very relevant question. This paper advances the state-of-the-art in our understanding of these points.\\n\\nThe biggest issue with this paper is the scope and impact of the results. juu2 brings up the transfer to another LLM; the results in Table 1 aren't all that strong. The restriction to a particular set of relations (and to KG relations in general) is also somewhat limiting. I don't think there's one right answer for how much the paper needs to engage with on this front, but making stronger and more general claims would of course make it stronger. As a more minor point, juu2 points out that connections of the paper's hypotheses with model size can't necessarily be drawn from the given data.\\n\\nTaken together, all of this contributes to an impression that this paper has some important and useful results, but it might not be the last word on this topic. A scaled-up set of experiments, scaled out to different settings, may find something new and different here.\\n\\nFinally, the paper explores the same question in a few different ways. 
For instance, the ability to predict the frequency is mostly a consequence of correlation (juu2), which lessens its impact a bit.\", \"additional_comments_on_reviewer_discussion\": \"J8rP points out limitations of the task scope and task formatting, which are somewhat addressed in the response and new experiments.\\n\\njuu2 brings up a number of points about the presentation and interpretation of the results, including the points mentioned above about correlation. Most of these presentational concerns are addressed.\"}",
"{\"title\": \"Thank you for the review and suggested discussion points\", \"comment\": \"Thank you for the detailed review and we\\u2019re glad the reviewer views our data analysis tool as a promising way to inspect closed-data LMs and that the findings are valuable. We are happy to elaborate on the points brought up here:\\n\\na.) The reviewer brings up the generalizability of these findings to other representational forms (linear, affine, non-linear). LREs have a particularly nice property in that they can capture relationships encoded as affine, linear, or translation transformations. We don\\u2019t explore constraining LREs in any particular way here (see, e.g., the translation baseline in the Hernandez et al. paper to get an idea which ones work with only a bias term). We agree more discussion about non-linear features will be useful, and have added some discussion about Csordas et al., 2024 (https://arxiv.org/abs/2408.10920v1) in lines 513--514, which finds evidence against the strong version of the linear representation hypothesis. They find an example of non-linear representations in a recurrent network. Our specific question is about how/when linear representations form, and want to be clear that we don\\u2019t rule out the existence of more complex features.\\n\\nb.) The reviewer brings up the relatively lower accuracies for predicting subject-object co-occurrences from representations compared to predicting object only frequencies. We indeed find positive results for predicting object occurrences and negative results for predicting subject-object occurrences from LRE features. Besides the fact that it is a much more difficult problem than predicting object occurrences alone, we can add more discussion on why we think this is. One reason this might be the case is that the distribution of subj-obj occurrences is very tight, with a few large outliers. 
For example, with language-of(Russia)=Russian, Russia and Russian co-occur 1.2M times, far outside the mean of around 100k for this relation. This makes it so the model can get good accuracy (near 70%) without capturing outliers. In terms of additional features that could be helpful: the pointwise mutual information between subject and object from a reference dataset may improve performance.\\n\\nc.) In response to using other relation types: Although we constrain our analysis to factual relations, these capture quite a large range of topics that would be memorized by models (geography, companies, familial relationships, occupations, media knowledge, etc.). However, to broaden our approach, we are adding analysis of commonsense relations as well (task-done-by-tool, fruit-inside-color, etc.). We think these might point to an interesting reporting bias, as some of these relations would be predicted as having low subj-obj co-occurrences (Paik et al., 2021 https://aclanthology.org/2021.emnlp-main.63/) . We appreciate this suggestion and will add these results in the coming days\\n\\nd.) see below:\\n\\n>>\\u201dother factors, such as the context in which terms appear, the syntactic structure of sentences, or the semantic relationships between words, could also influence the formation of linear representations\\u201d\\n\\n We agree with this point and have added discussion in the Limitations section. In short, given the consistency of the \\u2018linearization\\u2019 across relations, we would predict that this has minimal impact. Still, we will leave room to describe possible variables we don\\u2019t account for.\"}",
"{\"title\": \"Thank you for the review. Update on relations tested\", \"comment\": \"Thank you for your review and your questions, we\\u2019re glad the reviewer finds the work interesting and the results important.\\n\\n>>The scope of the work is somewhat limited, as only 25 factual relations are investigated. It is unclear whether the identified correlation is also valid for other relation types. Expanding the analysis to include more factual relations and other types of relations could further enhance the robustness of the findings and potentially offer additional insights.\\n\\nWhile we believe the 25 relations is not particularly limited (and covers a wide range of domains: geography, companies, familial relationships, occupations, media knowledge, etc.) we agree more relations will be helpful. We focus on factual relations because using subj.-obj. co-occurrences as a proxy for mentions is most strongly motivated by prior work (Elsahar et al., 2018), but with the relationship we have currently established, it will be interesting to expand on that.\\n We are expanding our analysis to the commonsense relations in the Relations dataset. These are relations like \\u201ctask done by tool\\u201d (shovels are used to dig, e.g.). We have collected the counts for the 8 commonsense relations from this dataset, and will post the analysis in our rebuttal in the next few days.\\n\\n\\nAs for whether this property holds outside of ICL templates, we know that it does in the zero-shot setting (if this could be considered non-ICL, the template is still technically the same). We have these results but did not include them in the paper because it\\u2019s shown in prior work (Hernandez et al., 2024)\"}",
"{\"title\": \"Thank you for the review, please see clarifying comments\", \"comment\": \"Thank you for the in-depth review. We\\u2019re glad the reviewer found the work interesting and easy to follow. The reviewer raises some good points, but we\\u2019d also like to clear some points up that seem to be misconceptions about the work. We\\u2019ve begun making some changes to make a few things more clear.\\n\\n\\n>> Being able to predict the frequencies of individual terms as well as the co-occurrence seems to be a direct implication of high correlation and therefore does not sound like a major standalone contribution\\n\\nThis is not necessarily true. Correlation tells us \\u201chow related are two variables?\\\", while regression allows us to answer \\\"How can we predict an output variable based on input variables?\\\". Also, note that we capture more complex non-linear relationships with a more complex regression model (like the random forest used here - lines 361-362). Therefore, we can relate multiple input variables, like task hardness (measured by accuracy and log probs) in addition to LRE features. In addition, even this simple correlation relationship has not been previously shown in previous work (Hernandez et al., 2024; Jiang et al., 2024; see related work on linear representations) because it is typically difficult to count individual tokens in a pretraining corpus (especially throughout training). We are able to demonstrate as a proof of concept that the representations themselves reflect frequencies in the pretraining corpus, thus laying out future work on determining pretraining data exposure with more sophisticated measures.\\n\\n>> Line 75: \\\"This frequency threshold decreases with model size\\\": This is only tested two model sizes (OLMo-7B and OLMo-[1]B), and the fact that GPT-J (6B) has as smaller threshold than OLMo-7B is an counterexample for this. 
Would be good just to be consistent with the rest of discussion to claim there is a connection to scale.\\n\\nThis is a fair point, and we have softened the claim about where this threshold lies across models. However, across relations within model checkpoints we have strong evidence for some threshold existing, even if we can\\u2019t derive a model agnostic, scale derived threshold. We will highlight that it\\u2019s inconclusive where this threshold lies on any given model, but we report a few data points that point to some trend existing. At the same time, we can confidently say that frequency strongly correlates with linear representations forming.\"}",
"{\"summary\": \"This paper explores the question of why linear structures form in LLMs by investigating the connection between training data frequency and the formation of linear representation, focusing specifically on factual recall relations. The study reveals that (1) the formation of linear representations is strongly correlated with subject-object co-occurrence frequency, and (2) the presence of linear representations can help predict relation frequency. Experiments are conducted using OLMo-1B, OLMo-7B, and GPT-J to validate these findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Exploring the origin of linear representation is an important question in LM interpretability. This work identifies a correlation between linear representations of factual recall relations and the subj-obj co-occurrence frequency in pretraining.\", \"This paper investigates the relationship between few-shot accuracy and the existence of a linear representation.\", \"Using the existence of linear representations to predict the frequency of terms in the pretraining corpus is interesting.\"], \"weaknesses\": [\"The scope of the work is somewhat limited, as only 25 factual relations are investigated. It is unclear whether the identified correlation is also valid for other relation types. Expanding the analysis to include more factual relations and other types of relations could further enhance the robustness of the findings and potentially offer additional insights.\", \"The linear representation seems to be affected by the context in LREs (e.g., four \\\"X plays the Y\\\" examples before the fifth one). Are the findings universally applicable to LLM generation without involving ICL formats?\"], \"questions\": \"Please see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Be more specific on the changes made to the paper\", \"comment\": \"I appreciate the effort the authors made to revise the submission based on the feedback. But it would be helpful if the authors can be more specific about the changes made, say by highlighting the changes.\\n\\nAlso, can you modify the paper to reflect our discussion: elaborate on the setting of Table 1 (what does Train OLMo Train GPT-J mean) and Figure 2 (see my previous response)?\"}",
"{\"comment\": \"We are glad the reviewer appreciates the changes we've made so far. Thank you for the continued feedback. We will address the question on the difference between the correlation and regression results, whether we should make a stronger claim on the connection to accuracy, and then move on to the remaining cosmetic questions.\\n\\n> Thanks for the explanation. I agree that \\\"even this simple correlation relationship has not been previously shown in previous work\\\", and I think this main finding of the paper is very interesting, as I said in the strengths. The point I am trying to understand here is, high-correlation seems to imply that you can \\\"predict an output variable based on input variables\\\". Consider a case where two variables are nearly perfectly correlated, isn't it obvious that one can fit a linear model that takes one as input and another as output?\\n\\nAs we understand it, the reviewer is asking whether the results in Figure 2, that linear representations and pretraining frequency correlate very strongly entails the results that we can predict term frequency from LRE measurements. A very important distinction between the two sections is that we move from testing whether average frequency for terms in a relation correlate with an LRE being effective for that relation, to testing whether we can use the LRE's effectiveness on a given datapoint *to predict the frequency of that individual term*. Evidence that these two things are not the same is also present in the paper: subject-object frequency correlates more strongly with the LRE appearing than object frequency (.82 vs. .59 respectively), but predicting object frequency for an individual datapoint is much more effective than trying to predict the subject-object co-occurrence frequency compared to the mean baseline. 
Even disregarding this fact, the reviewer is entitled to think this isn't an interesting use case, but we think these experiments are necessary to show that there is practical significance to the correlational findings.\\n\\n> incontext learning accuracy on certain tasks has a strong correlation to the pretraining term frequencies. And in this work you show that \\\"the presence/absence of linear representations\\\" is strongly correlated to frequencies as well. Then isn't it already implied that there should be some correlation between the accuracy and \\\"the presence/absence of linear representations\\\"? And the an obvious hypothesis for \\\"however it is still unclear whether a linear representation causes or is necessary for high performance\\\" is the pre-training term frequency is the common cause.\\n\\nWe were hoping to see a clear relationship where LREs form right before/after accuracy jumps, but we couldn't make a strong case for this. Are we overcomplicating this point? Yes, pretraining frequency seems to be the common cause, but we are wondering if the model is only accurate **because** of the presence of the linear structure (i.e., it won't be accurate unless these form). We are definitely receptive to feedback on this point.\\n\\n>it would be helpful if the authors can be more specific about the changes made, say by highlighting the changes.\\n\\nPlease see the general comment\", \"we_will_now_address_the_cosmetic_questions\": \"> why are some points darker than the others? What do the lines (light grey and dark grey lines) mean?\\n\\nThis is now mentioned directly in the caption of figure 2, so the reader no longer has to go back and forth:\\n \\n\\\"Symbols represent different relations. Highlighted relations are shown in darker lines.\\\" As already mentioned in the footnote, these are \\u2018country largest city\\u2019, \\u2018country currency\\u2019, \\u2018company hq\\u2019, \\u2018company CEO\\u2019, and \\u2018star constellation name\\u2019. 
We chose these because they occupy different ranges of frequencies, to highlight the relationship. \\n\\n> Also, can you modify the paper to reflect our discussion: elaborate on the setting of ... Figure 2 (see my previous response)?\\n\\nWe are currently migrating data between computer clusters and we do not have time to recreate the graphs in Figure 2 before the discussion period ends. Thank you for understanding, but we really like this cosmetic change and will definitely make it happen for the final version! Still, we believe all of the relevant data is presented in the current draft.\\n\\n> why do all dots for GPT-J have the same shape while the dots for the 2 left plots do not? \\n\\nEach shape represents a relation. This is to visually help the reader look at the progress across checkpoints for a given relation on average. GPT-J does not have checkpoints so the same shape was used, but in hindsight, we agree we should show these relations! Again, we are migrating data, apologies that we can't make this change immediately.\\n\\n> elaborate on the setting of Table 1 (what does Train OLMo Train GPT-J mean)\\n\\nWe have updated the table to say \\\"Eval on X\\\" instead of \\\"Train on Y\\\" to be a little more descriptive. In the caption of table 1, we added \\\"Eval. on GPT-J means the regression is fit on OLMo and evaluated on GPT-J.\\\"\"}",
"{\"summary\": \"This research paper explores how the frequency of certain words appearing together in the data used to train a language model (LM) affects the LM's ability to learn simple, linear rules for representing facts. The authors found that the more often two words related to a fact appear together in the training data, the more likely the LM is to learn a simple rule to represent that fact. This discovery helps to understand how LMs learn factual information and could be used to figure out what kind of data was used to train secret LMs. The authors also created a tool to help others count how often words appear together in large datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The study finds a strong correlation between the average co-occurrence frequency of subjects and objects within a relation and the quality of linear representations (LREs) formed for that relation. This correlation surpasses the individual correlations with subject frequencies or object frequencies, highlighting the significance of subject-object co-occurrence.\\n\\nThe study uses Linear Relational Embeddings (LREs), which effectively approximate the computations performed by an LLM to predict objects in factual subject-relation-object triplets. This paper builds upon this research by examining how the frequency of subject-object co-occurrences in pretraining data directly impacts the emergence and quality of these LREs.\\n\\nThe paper introduces a promising technique for analyzing the pretraining data of closed-source models by leveraging the connection between linearity and frequency.\", \"weaknesses\": \"The paper presents valuable findings; however, they should provide some discussion along the following directions:\\n\\n(a) The paper primarily focuses on Linear Relational Embeddings (LREs) as a representative class of linear representations in LLMs. 
However, LLMs might employ various other forms of linear or non-linear structures to encode information. This focus on LREs could limit the generalizability of the findings to other types of representations. Is there any strong hypothesis for restricting to LREs?\\n\\n(b) While the study demonstrates that LRE features can be used to predict the frequencies of individual terms with reasonable accuracy, predicting the frequency of subject-object co-occurrences is challenging. The regression models achieve only marginal improvements over baseline performance in this task. Integrating additional features might be helpful here.\\n\\n(c) The study analyzes a set of 25 factual relations from the Relations dataset. However, LLMs are trained on vast and diverse data, encompassing a much wider range of relations and concepts. Expanding the scope of analysis to encompass a broader range of relations would provide a more comprehensive understanding of the role of frequency in shaping LLM representations.\\n\\n(d) The paper focuses primarily on the frequency of terms in the pretraining data. However, other factors, such as the context in which terms appear, the syntactic structure of sentences, or the semantic relationships between words, could also influence the formation of linear representations. For example, LLMs are proven to not do well if facts are stored in templates, as they tend to remember the template and not the facts. The proposed approach may not be applicable in those scenarios.\", \"questions\": \"Refer to the previous section.\\n\\nIt would be good if the authors can dedicate a section to discuss the potential impact of confounding factors, such as context, syntax, and semantics. 
Explain why controlling for these factors is challenging in the current study but emphasize the importance of future work to disentangle their effects from the influence of frequency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your responses, which have addressed my second concern. I would like to maintain my positive assessment of this work.\"}",
"{\"title\": \"Details on Efficiency\", \"comment\": \"> Line 100: The efficiency of the proposed searching tool is not well discussed.\\n\\nWe can expand on this. Our approach is 10-100x faster than reference approaches we tried (sliding window, naive search with np.where). Consider a matrix of size 4096x4096 which represents a batch of 4096 sequences of length 4096. We need to search that batch for any occurrences of any of the 10k entities (subjects and objects) in our dataset, which may be represented by multiple tokens each. A big part of the speedup comes from not having to store intermediate variables as python objects and from searching for many entities in parallel. Because the code runs entirely in C++, we can drop the GIL (Global Interpreter Lock) and parallelize multiple threads instead of processes (which take longer to instantiate, and need their own memory). At the end of the call, a numpy array is returned with the exact indices, across thousands of sequences, at which the tokens occur. This would be impossible to do in reasonable time (years) in Python alone, but using Cython, our implementation can be called directly as a Python module in a standard environment, and is totally agnostic to the specific data passed into it. Please let us know what specifically the reviewer considers to be a proper discussion of the efficiency of our approach.\"}",
"{\"title\": \"Comparison to WIMBD efficiency\", \"comment\": \"> It is possible to compare the efficiency between the proposed tool to WIMBD?\\n\\nAt the current point in time, we can not compare these efficiencies directly, but we can look into it. However, consider that WIMBD requires indexing an entire corpus before searching it. Our method searches individual batches for token occurrences and is better suited to track counts across training time. WIMBD is effective for repeated searches across the same corpus, while our method is flexible to work with new data without requiring reindexing (such as adding training data to a mix), so they serve different purposes.\"}",
"{\"title\": \"Summary of changes made during the discussion (and pending cosmetic changes)\", \"comment\": \"Thank you to the reviewers for their effort in evaluating this work. We are happy that the reviewers agreed the work was interesting and valuable, as well as important for model interpretability. Additionally, we are glad that reviewers were generally excited about the potential use of our findings in tools for making inferences about the training data of language models (LMs). We will outline the contributions, points that reviewers made, and our responses here:\\n\\n## Contributions\\n1. We identify a correlation between linear representations (in the form of linear relational embeddings (LREs), see [Hernandez et al., 2024](https://arxiv.org/abs/2308.09124)) of factual recall relations and the subject-object co-occurrence frequency in pretraining.\\n\\n2. We introduce a tool for quickly searching and counting token occurrences in training data batches that offers more flexibility than existing tools.\\n\\n3. We leverage the connection between linear representational structure and frequency to show that we can use the presence and 'strength' of an LRE to predict the pretraining frequency of *individual terms* in the pretraining data, allowing us to make inferences about the pretraining data of open-weights models.\\n\\n\\n## Changes\\n\\nThe biggest change we made was **adding more relations**. There were concerns that 25 relations is not enough or that limiting to factual relations was not telling a general enough picture. We want to emphasize that this is not a small dataset: across the 25 relations, we had **over 10,000** unique subjects and objects. We also focus on factual relations following prior work ([Elsahar et al., 2018](https://aclanthology.org/L18-1544/)) which finds that using subject-object co-occurrences is a good proxy for counting factual mentions. Still, there are interesting questions around how the analysis extends to other relations. 
To answer this, we included 8 additional commonsense relations. These are: fruit_inside_color, fruit_outside_color, object_superclass, substance_phase, task_done_by_person, task_done_by_tool, word_sentiment, work_location. We added these as Appendix F. While we find interesting relationships between frequency and causality that mirror the factual relations, we also raise issues with using subject-object co-occurrences as counts for some of these relations.\", \"we_outline_the_remaining_changes_requested_by_reviewers_below\": [\"We added more discussion around prior theoretical results to the Discussion section (reviewer eSj4)\", \"Included further discussion around non-linear features (reviewer J8rP)\", \"Qualified the claim about whether we could derive model-agnostic frequency thresholds for when linear representations form (multiple places). We agree we do not have enough data to support the strong version of this claim. (reviewer juu2)\", \"Updating Table 1 to be more descriptive of the setup where we test cross-model generalization of the regression model we fit (juu2). We updated the caption to this table as well to reflect this\", \"Updating the caption in Figure 2 to be more descriptive of which features we are discussing\", \"Clarifying which numbers we are comparing when discussing generalization performance of the regression (L482-483)\", \"We received advice for making Figure 2 more readable from reviewer juu2. We believe this is very helpful feedback and will *definitely* make the changes in the final revision, but we won't be able to make these changes before the rebuttal deadline. Note however, that this is purely presentational, and the relevant data is presented in the current draft.\", \"Once again thank you for a very productive round of reviews\"]}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Thank you for the review\", \"comment\": \"Thank you for the review. We\\u2019re glad the reviewer found the work insightful. We hoped to fill a gap in understanding that theoretical work could not address, so we\\u2019re also happy this came across in the paper. To address your question, Ethayarajh et al., https://aclanthology.org/P19-1315.pdf (and the cited related work within) point to frequency driving structure in static word embeddings, as well as the training objective driving linear representations in LLMs in Park et al., 2024 (https://arxiv.org/abs/2403.03867). We have provided more discussion synthesizing these ideas in our own work in the updated pdf in the discussion section. Thank you for pointing out some related work we missed, as well.\\nWhile our analysis is mostly correlational, we show that the same trends hold for two model families (OLMo and GPT-J). If the reviewer has specific feedback on what metrics or experiments would broaden the impact of the paper, we would be happy to consider implementing them, if feasible.\"}"
]
} |
EDJ7cPZk7V | Forgetting Order of Continual Learning: What is Learned First is Forgotten Last | [
"Guy Hacohen",
"Tinne Tuytelaars"
] | Catastrophic forgetting poses a significant challenge in continual learning, where models often forget previous tasks when trained on new data. Our empirical analysis reveals a strong correlation between catastrophic forgetting and the learning speed of examples: examples learned early are rarely forgotten, while those learned later are more susceptible to forgetting. We demonstrate that replay-based continual learning methods can leverage this phenomenon by focusing on mid-learned examples for rehearsal. We introduce Goldilocks, a novel replay buffer sampling method that filters out examples learned too quickly or too slowly, keeping those learned at an intermediate speed. Goldilocks improves existing continual learning algorithms, leading to state-of-the-art performance across several image classification tasks. | [
"continual learning",
"catastrophic forgetting",
"replay buffer"
] | Reject | https://openreview.net/pdf?id=EDJ7cPZk7V | https://openreview.net/forum?id=EDJ7cPZk7V | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zlTKa1jGdi",
"x9XaXHG2Xe",
"pDYPJlz18k",
"idkJ1jHNQP",
"fUNfVHstPS",
"enBnEwvIEP",
"Qb3GDzup0f",
"Nvaq10AvVs",
"Ndv7qGibuh",
"LEOdGY1Zpr",
"Kw7RYj5qQT",
"KDODXlA6CF",
"JP2gQqAKEn",
"HQfog5oTGa",
"AK80nH4mt3",
"9USYYHvsE7",
"7HJOkhr8m8",
"6fVraNS1oh",
"3HcgvhCgxH"
],
"note_type": [
"official_review",
"decision",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment"
],
"note_created": [
1730196394785,
1737523889204,
1733136309070,
1730662653772,
1730123847919,
1732703556055,
1732540703701,
1732535385010,
1730644937284,
1730665325342,
1732745484061,
1732540676808,
1732544621846,
1732536604297,
1732695580609,
1732540808153,
1732536829508,
1735088009382,
1732536044803
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_9eZ5"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_pNCs"
],
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_m8pC"
],
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_9eZ5"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_NS7R"
],
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_XrZi"
],
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_NS7R"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Reviewer_XrZi"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8123/Area_Chair_JZHV"
],
[
"ICLR.cc/2025/Conference/Submission8123/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"In Continual Learning, the methods that have worked best are memory-based. These methods work by sampling a percentage of the training set of each task that is then used in the training of subsequent tasks to \\u2018remember\\u2019 past tasks. In this paper, the authors analyse the best examples to populate the buffer. They analyse the learning speed, showing how it affects performance when sampling from the training set by leaving out the slowest- or quickest-to-learn samples. The authors show that items learned quickly are the least forgotten, and conversely, items learned more slowly are the first to be forgotten. With this insight, the authors present a new methodology for populating memory called \\u2018Goldilocks\\u2019. Empirically, the authors show that sampling only from items with an intermediate learning speed can have comparable or better results than current methods for populating memory across different benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors' motivation for presenting the problem is evident in their approach, which aids in understanding the problem and its relevance.\", \"An analysis is presented that helps to understand the method before it is introduced. Multiple experiments show the usefulness of eliminating the very fast and slow-to-learn examples, to sample only intermediate ones.\"], \"weaknesses\": [\"Despite the authors' thorough analysis, no explanation or intuition is provided as to why medium learning speed items are the most useful for populating memory. It would be good if the authors provided a rationale beyond the empirical results. This rationale could be based on intuition or other work.\", \"The results shown are limited to a small group of scenarios. The analyses performed are only based on CIFAR10 and CIFAR100 divided into 2 tasks. 
A better analysis should emphasize a broader set of scenarios and benchmarks to ensure the generalisability of the performance.\", \"Other works have shown that performance can change drastically as the number of tasks increases.\", \"The analysis shown is with the Task-Incremental learning scenario, I recommend considering the class-incremental scenario as it is a more widely accepted scenario. The authors mention that the analysis is in the Appendix, but I did not find corresponding results.\", \"This may affect figures such as Fig2a, where you can see that the forgetting is not as drastic as in class incremental and even a slight increase is seen near epoch 150.\", \"Although the authors show, both in their analysis and with their method, that the results achieved are better than other alternatives, the benefit is only marginal. Often even less than the standard deviation.\", \"During the analysis, the difference is often at most 2%, between all removal combinations slowest/quickest. This shows that the margin of improvement is very slight compared to uniformly populating the memory.\", \"Some arguments and comments in the paper are difficult to extract from the results.\", \"One example is in line 397: 'We find that regardless of the similarity or dissimilarity between subsequent tasks and the original task, the optimal replay buffer composition remains largely independent and consistent'. Nowhere does it show how different or similar the tasks they use are, and they base this only on experiments in CIFAR100.\"], \"questions\": [\"A score called c-score [1] seeks to explain how consistent an example is during training. Can learning speed be related to this score?\", \"The same order of classes is always used, which may affect the conclusions drawn. Is there a reason for this?\", \"Each seed used to run the experiments commonly brings a new class order. 
This helps to not bias the results to a particular order that may benefit one method over another.\", \"In line 212, the authors mention using an experience replay strategy that alternates batches of data from the new task and the replay buffer. Why use this and not the standard approach of mixing samples from the current task and the buffer in a 50-50 way?\", \"Can the learning rate chosen affect the results and conclusions?\", \"For example, in fine-tuning, it is recommended to use a small learning rate so as not to modify the old weights significantly.\", \"Do the authors have results for different CL methods with different strategies to populate the memory? The methods are usually independent of how the data is sampled, so a complete comparison of how much sampling methods affect different memory-based methods can be done.\", \"I understand using 500 examples for CIFAR10 and CIFAR100, but in TinyImagenet, this means less than 3 elements per class, which can strongly affect the sampling methods used. Do you have experiments with a higher number? It would also be essential to mention the reference to the 'original work' in line 466.\", \"[1] Jiang, Ziheng, et al. \\\"Characterizing Structural Regularities of Labeled Data in Overparameterized Models.\\\" International Conference on Machine Learning. PMLR, 2021.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you very much for your follow-up comments and for raising your score based on our discussion. We greatly appreciate the time and thought you have put into reviewing our work.\\n\\nWe wanted to kindly note that while you agreed to update your score from weak reject to weak accept, the official review in the system is still marked as 5 (\\\"marginally below the acceptance threshold\\\"). Could you please update the score in the system to reflect your revised assessment of 6? This adjustment would ensure that our discussion is accurately represented.\\n\\n----\\n\\nRegarding your additional comments:\\n\\n**Example characteristics:**\\n\\nWe agree that characterizing which examples fall into the too-easy and too-hard categories is an intriguing and important question. Prior work on simplicity bias (e.g., [1], [2], [3]) provides various perspectives on which examples are learned more quickly, with factors such as frequency patterns ([1]), image characteristics in specific contexts ([2]), or the rank of the required solution ([3]) influencing learning speed. However, the correlation between learning speed and forgetting is not perfect, suggesting that the characteristics driving forgetting may differ slightly. Due to the nature of continual learning, we also suspect that these characteristics are even more context-dependent than those of simplicity bias. While we think this area is worth deeper exploration, we believe it falls outside the scope of the current work and would make for an excellent subject for future research.\\n\\n**Title:**\\n\\nThank you for clarifying your comment. We will consider revising the title in future iterations to better reflect the broader contributions of the work.\\n\\n----\\n\\n[1] Rahaman, Nasim, et al. \\\"On the spectral bias of neural networks.\\\" International conference on machine learning. PMLR, 2019.\\n\\n[2] Pliushch, Iuliia, et al. 
\\\"When deep classifiers agree: Analyzing correlations between learning order and image statistics.\\\" European conference on computer vision. Cham: Springer Nature Switzerland, 2022.\\n\\n[3] Huh, Minyoung, et al. \\\"The low-rank simplicity bias in deep networks.\\\" arXiv preprint arXiv:2103.10427 (2021).\"}",
"{\"summary\": \"In this paper, the authors present an empirical study that reveals a strong correlation between catastrophic forgetting and the learning speed of examples. They found that the examples that are learned early in the continual learning process are rarely forgotten, while those learned later are more susceptible to forgetting. Leveraging this finding, they introduced a new replay buffer sampling method, Goldilocks, which filters out examples learned too quickly or too slowly, keeping those learned at an intermediate speed. On several low- to mid-complexity image classification tasks, they showed the efficacy of their proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The analysis of learning speed and catastrophic forgetting in continual learning is new.\", \"The authors presented the idea clearly.\", \"Illustrations and figures, especially the binary classification matrix plots, are very useful in understanding the concept of the paper.\"], \"weaknesses\": [\"The observed correlation between example learning speed and catastrophic forgetting is empirical, with no theoretical analysis provided, hence of limited significance.\", \"Empirical analysis provided to establish the correlation is not sufficient. For example, learning dynamics depend on various factors such as learning rate, network architecture, optimizer, regularization, etc. One of the major issues with the current paper is that it does not explore these dimensions to establish the correlation between example learning speed and catastrophic forgetting.\", \"How does the learning rate for different tasks (initial tasks and later tasks) impact the correlation? If we use a smaller learning rate for later tasks, how do forgetting dynamics change? A detailed study is missing here.\", \"How does the correlation change if plain SGD, Adam, Ada-Grad, etc. 
optimizers are used?\", \"The paper only explores ResNet and its smaller variants for the analysis. For other architectures, such as transformers, VGG nets, etc., do the same conclusions stand?\", \"Goldilocks is evaluated on low-to-mid complexity image classification tasks only. Detailed analysis on higher-complexity classification tasks on ImageNet is missing.\", \"As stated in the limitation section, the method does not apply to online CL settings and is limited to classification tasks.\"], \"questions\": \"See the Weakness section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
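As the reviews describe it, Goldilocks ranks examples by learning speed, drops the fastest-learned fraction q and the slowest-learned fraction s, and samples the buffer uniformly from the remainder. A minimal sketch of that selection rule follows; it is a hypothetical illustration based on the reviews' description, not the authors' code, and `learning_speeds` is assumed to be precomputed from the first task's training run:

```python
import numpy as np

def goldilocks_sample(learning_speeds, buffer_size, q=0.2, s=0.2, seed=0):
    """Select replay-buffer indices, excluding the fastest-learned q
    fraction and the slowest-learned s fraction of examples, then
    sampling uniformly from the rest. Hypothetical sketch only.

    learning_speeds: 1-D array; higher means the example was learned earlier.
    Returns an array of selected example indices.
    """
    rng = np.random.default_rng(seed)
    n = len(learning_speeds)
    order = np.argsort(learning_speeds)   # slowest-learned first
    lo = int(s * n)                       # drop the slowest-learned s fraction
    hi = n - int(q * n)                   # drop the fastest-learned q fraction
    candidates = order[lo:hi]
    size = min(buffer_size, len(candidates))
    return rng.choice(candidates, size=size, replace=False)
```

With q = s = 0, this degenerates to plain uniform sampling, which is the baseline the reviews compare against.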
"{\"summary\": \"The authors claim & show that examples that are learned first (simple examples) are in general not forgotten, while examples that are the hardest are forgotten quickly. They propose a replay sampling method that attempts to counter-balance this phenomenon by replaying only samples that are of medium difficulty.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The authors make an interesting observation that could have a strong impact in understanding the learning process of neural networks and improving replay-based continual learning methods.\", \"Strong evidence is brought on CIFAR100, using different tools (Figure 2), among which the training of multiple networks and the consistent observation across these networks that learning speed is strongly correlated with forgetting rate.\", \"They obtain consistent improvements when applying their sampling method on top of existing methods, and across datasets (CIFAR-100, CIFAR-10, and TinyImagenet).\", \"The results are clearly presented using several demonstration tools and the designed method is simple; the ablation of the number of quickly learned samples and slowly learned samples is comprehensive and easy to read.\"], \"weaknesses\": [\"**W1** Maybe a bit more attention could be given to the engineering of the class-incremental learning results to make them comparable to the sota ones. Right now they are only given on CIFAR100-2 with a buffer size of 500. It would be interesting to have them on CIFAR100-10 with a bs of 1k or 2k for instance, and maybe applying some anti task-recency bias method or simply probing the representations to show whether the probed representation from the model using the new sampling method is better.\"], \"questions\": [\"**Q1** It is good that results for both CIL and TIL are presented, but for CIL, they are far less complete. 
Would it be possible to have the same as Figure 2 for CIL in the appendix?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I thank the authors for their detailed answers and modifications to the paper.\\n\\n**Intuition**\\n\\nAlthough I agree with the small explanation, it would be great to know (or have an intuition about) what makes an example slow (or quick) to learn. For example, one could suggest that because it is a slow-to-learn example, it is necessary to add it to the memory so the model doesn't have to re-learn it after a long process. \\nWhat do you mean by \\\"provide limited additional value\\\"?\\nSome results suggest something else, but it is not entirely intuitive. For example:\\n- Are slow-to-learn examples outside of the class distribution (as some previous work suggests)?\\n- Are quick-to-learn examples more in distribution?\\n\\n**Datasets**\\n\\nI appreciate the authors' completion of the results on those datasets; however, I still believe that different benchmarks could provide a clearer picture of how this method behaves, giving greater validity to the results shown in the current work.\\n\\n**Marginal benefits and statistical significance**\\n\\nI understand the consistent improvement over uniform sampling shown in Figure 8. However, even after fixing the bug in the standard error measurement, the difference from other replay buffer sampling methods is not significant in most cases. Compared with the second-best, Goldilocks achieved less than a 1% improvement. 
This may not be sufficient, considering the need to find the percentage of slow and fast elements to eliminate, as suggested by another reviewer, and the difference in performance that can be seen in the figures shown in the paper.\\n\\n**Continual learning without uniform sampling**\\n\\nI don't entirely agree that \\\"non-uniform samplings are often highly specialized.\\\" The main reason people tend to prefer the uniform distribution is the small performance increase of \\\"non-uniform samplings,\\\" which, in my opinion, is also happening in the proposed method.\\n\\nIn summary, I agree that the proposed method has the potential to contribute to the CL community. However, the paper needs an essential explanation of why removing slow and fast examples helps achieve slightly better performance than other sampling methods. This extra analysis can help the work make a real contribution, not only to the CL area. In addition, understanding the rationale for which elements need to be removed could help eliminate the dependency on both hyper-parameters. \\nBecause of this, I am raising my score, but I am still inclined to reject the paper.\"}",
"{\"title\": \"continued response\", \"comment\": \"**Learning rate:**\\n\\nWe conducted additional experiments, repeating the analysis in Fig 4 with:\\n* A different learning rate from the beginning.\\n* A smaller learning rate for subsequent tasks, as you suggested.\\n\\nThe results show that the qualitative trends remain consistent in both scenarios; see Appendix C and Fig 23.\\n\\n**Continual learning without uniform sampling:**\\n\\nOur analysis primarily focused on methods employing uniform buffer sampling for two reasons:\\n* Many state-of-the-art (SOTA) methods rely on uniform sampling, allowing for broader and more direct comparisons.\\n* Methods designed for non-uniform sampling are often highly specialized and may fail under alternative sampling strategies like ours or random sampling. Evidence for this is shown in Table 1, where sampling strategies from other methods often underperform compared to random sampling across different CL scenarios.\\n\\n**Tiny ImageNet buffer size:**\\n\\nWe acknowledge that using a small buffer for Tiny ImageNet could disproportionately affect certain sampling methods. However, we also evaluated Tiny ImageNet with a significantly larger buffer of 10k examples in Figure 4c. This provides a broader perspective on how buffer size impacts the results and ensures our conclusions are not limited to small-buffer scenarios.\"}",
"{\"title\": \"General comment for the reviewers\", \"comment\": \"We thank all the reviewers for the constructive reviews and the time and effort they took in reviewing our paper. The diverse range of scores (10, 8, 5, 3, 3) highlights polarized views on our paper. We believe this polarization stems from its unorthodox focus on behavioral analysis rather than conventional methods or theory, and we would like to provide additional context for our approach.\\n\\nIn this paper, we take an observational approach, observing a novel connection between simplicity bias and catastrophic forgetting. While simplicity bias -- where neural networks learn simpler examples before complex ones -- is well-known, we find that in catastrophic forgetting there is a \\\"reverse simplicity bias,\\\" where complex examples are forgotten before simpler ones.\\n\\nOur contributions are threefold:\\n* Observing and measuring: We uncover this reverse simplicity bias and define tools to quantify it.\\n\\n* Exploration: We systematically test the phenomenon across diverse scenarios, identifying factors that influence it.\\n\\n* Practical application: We demonstrate its utility in continual learning by introducing a sampling strategy (Goldilocks) that leverages this insight, improving multiple methods across varied settings.\\n\\nThis type of behavioral investigation, which is focused on observing a phenomenon rather than suggesting a new method or conducting a theoretical study, is inspired by approaches common in neuroscience and psychology, where systems too complex for full theoretical analysis are studied empirically to gain insights from observed behavior. Similarly, as neural networks remain challenging to analyze fully mathematically, we argue that behavioral studies can offer valuable understanding.\\n\\nWe encourage reviewers to evaluate the paper from this perspective. 
Beyond the sampling method, we ask that you consider whether the observed connection between simplicity bias and catastrophic forgetting is both novel and significant, whether it enhances our understanding of catastrophic forgetting, and whether it has the potential to inspire future research.\\n\\nWe hope this clarification helps reviewers see the broader value of our work beyond the introduction of a new sampling method, and reconsider its contribution to the field.\"}",
"{\"summary\": \"The paper explores a strategy for selecting examples to include in a replay buffer for continual learning. The main idea is to exclude two sets of examples: those that are learned too easily and those that are difficult to learn, with the aim of improving generalization across a sequence of classification tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors take this fairly simple idea and run a series of tests. These experiments cover a range of datasets and settings for the size of the buffer of replayed examples. They also explore two different task orderings and show that the results are consistent across them. Most of the experiments focus on a sequence consisting of just a pair of tasks, but there are some results with a more extensive set of tasks. The experimentation and reporting of results is clear and fairly complete, especially with the standard error discussion and class incremental results presented in the Appendix.\", \"weaknesses\": \"The chief weakness is a lack of significance. The paper is mostly an exploration of whether a type of simplicity bias can be used to guide the selection of examples in the replay buffer. It does not advance a substantive new method or analysis, but seems like a straightforward application of existing ideas. The results show a consistent but not whopping win for this approach.\\n\\nA second weakness is a lack of analysis of the types of examples that fit into the too-easy and too-hard categories. 
Showing that the examples that are learned earlier are forgotten less and those that are learned later are forgotten more is not surprising, as it fits well with various studies such as the simplicity work (as acknowledged by the authors).\\n\\nAs well, there is quite a bit of variation across the datasets and experimental conditions, such as buffer size, in terms of the relative performance of different percentages of the too-slow and too-fast sets that should be excluded. There is no analysis of this, which begs the question of how to set these hyperparameters in a new setting.\", \"questions\": \"I'd recommend that the authors make the method more practically applicable by showing how it can be deployed in a few new settings (e.g., combination of dataset and replay buffer size). One way to address this would be to demonstrate that a small amount of data and experimentation can be used to determine a set of hyperparameters that exhibit strong performance.\\n\\nOne minor question concerns the title, which doesn't quite fit the primary message of the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper analyzes the forgetting discrepancies among different examples and provides a theory that the examples that are learned first are the least prone to forgetting, while those learned last are the most prone. The paper also proposes a practical algorithm for sample selection for the replay buffer, where it removes the examples that are learned first or last.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper demonstrates simplicity bias in neural networks.\", \"The paper proposes an effective replay buffer sample selection algorithm that outperforms uniform sampling in many cases and also other subsampling algorithms in some cases.\"], \"weaknesses\": [\"Completeness: Table 1 should also include CIFAR-100-5, CIFAR-100-20, and Tiny-ImageNet.\", \"Limitation: The conclusion may depend on the training time on each task. For example, if the number of epochs is small, then the hardest-to-learn examples have not been learned, so they may also need to stay in the replay buffer. The paper has also acknowledged that the method may not be suitable for stream learning in its limitation section. However, it would be better if the paper could give guidance on the number of epochs required for the proposed method to work well.\", \"Hyperparameters: The algorithm may rely on selecting hyperparameters (e.g. s and q) for removing the slowest and fastest examples. And it might be unclear how those parameters vary across different datasets. If choosing a hyperparameter requires repetitive experiments, then it may defeat the premise of continual learning.\"], \"questions\": [\"I wonder if the authors can provide experiments on other datasets, and show how hyperparameters will vary across different datasets.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I appreciate the detailed responses to my review as well as the others. These have addressed many of the issues. I also like the more detailed discussion of findings on class-incremental learning.\\n\\nHowever, some of the responses to my review are not adequate. The paper and responses do not contain sufficient analysis of the types of examples that fit into the too-easy and too-hard categories. This is especially important in what you are calling an observational paper -- are there some characteristics of examples that are readily forgotten beyond their learning speed? This could also add some insight into why having relatively fewer of them in the replay buffer may be beneficial. Another reviewer also brought this point up, but I did not see any response to it.\\n\\nAlso, my point about the title was not that Goldilocks should be in the title but rather that it does not capture the main contribution of the paper. The title focuses on the forgetting order, which is the first of the three contributions highlighted by the authors, and does not address other contributions, such as how this forgetting order should be taken into account in replay to ameliorate forgetting.\\n\\nNonetheless, I will raise my score one point to push it from weak reject to weak accept.\"}",
"{\"title\": \"Response for reviewer 9eZ5\", \"comment\": \"Thank you for your elaborate review. Below, we address each of your points separately.\\n\\n**Intuition**\\n\\nSimplicity bias research suggests that neural networks learn examples in order of increasing complexity. At the end of a task, the network performs well on examples up to a certain complexity level. For replay buffers, this implies that:\\n\\n* Excluding slowly learned (high-complexity) examples makes sense because the network struggled to learn them initially and is unlikely to benefit from replaying them.\\n* Excluding quickly learned (low-complexity) examples is also reasonable because these examples provide limited additional value, given the network\\u2019s ability to handle more complex tasks.\\n\\nWe added a discussion of this point to Section 3.2.\\n\\n**Other datasets**\\n\\nWhile Figs. 2 and 3 focus on 2-task scenarios in CIFAR-10 and CIFAR-100, the paper also includes results from broader settings. For instance:\\n* Fig. 4c: Tiny-ImageNet with 2 tasks\\n* Fig. 7: CIFAR-10 with 5 tasks and CIFAR-100 with 20 tasks\\n* Fig. 8: CIFAR-100 with 5 tasks and Tiny-ImageNet with 10 tasks\\n\\nAlthough these figures do not explicitly quantify the correlation between learning speed and catastrophic forgetting, the success of Goldilocks in these scenarios implicitly relies on this relationship.\\n\\nTo address your suggestion directly, we have added new results in Appendix B (Fig. 19) and Section 2, validating the correlation from Fig. 2c across diverse datasets and task configurations, including CIFAR-100 with 20 tasks, CIFAR-10 with 5 tasks, and Tiny-ImageNet with 2 tasks, finding an even stronger correlation.\\n\\n**Task incremental learning**\\n\\nBoth class-incremental and task-incremental learning are important and widely accepted paradigms in the community. 
Our choice to focus on task-incremental learning in the main paper was deliberate, as we believe it provides a \"cleaner\" view of catastrophic forgetting.\\n\\nThat said, we recognize the value of evaluating both scenarios. To address this, we conducted a parallel analysis in the class-incremental setting and included these results in Appendix A of the original manuscript. In the revised manuscript, we expanded the analysis, repeating Figs. 2, 3, 4, 6, 7, and 8 with the CIL setting, in Figs. 9-14 respectively (App A). A short discussion was added to Sec 2.2, noting that the CIL results mirror those of TIL.\\n\\n**Marginal benefits and statistical significance**\\n\\nWe discovered a bug in the code used to generate Fig 24, which contained the standard errors for Fig 4. This was fixed in the revision, showing that all the observed results are statistically significant.\\n\\nOther than the standard errors themselves, the consistent results across a large range of hyperparameters and experiments also indicate statistical significance. Large standard errors would have resulted in highly erratic behavior, particularly in experiments with closely related hyperparameters. Instead, our findings demonstrate smooth and systematic improvements.\\n\\nWe note that a 2\\% improvement in final accuracy is non-trivial in the continual learning domain, and many works report much smaller improvements. Moreover, for scenarios with multiple tasks (Fig 8), the gains are more pronounced, reaching up to 5\\%.\\n\\n**Extracting arguments from the results**\\n\\nWhile Section 3.3 in the main paper focuses on subclasses of CIFAR-100, we address broader task dissimilarities in App E. We include scenarios where the second task is:\\n* A rotated version of the first task, with a different objective\\n* Random labels\\n\\nBoth setups introduce dissimilarity between tasks (see Fig 28 in App E). 
We have rephrased the relevant text in Section 3.3 to clarify this point.\\n\\n**c-score:**\\n\\nThank you for bringing this reference to our attention. The c-score measures the expected accuracy of individual examples and, like other accuracy-based metrics, shows a strong correlation with learning speed. In Appendix G, we discuss similar accuracy-based scores and their relationship to catastrophic forgetting. Among all these scores, learning speed has the strongest correlation to catastrophic forgetting and is the cheapest to compute, making it well-suited for our analysis. We added a short discussion and a direct comparison of the c-score to App G and Fig 29d.\\n\\n**Class order:**\\n\\nThe specific split of classes into tasks did not affect our results. We added results with another data split to the revised manuscript, in App C, Fig 21c. The results are similar to the original ones in Fig 4.\\n\\n**Experience replay strategy:**\\n\\nIn our experiments, we observed no significant difference between alternating batches and mixing samples from the current task and the replay buffer in a 50-50 ratio. The choice to alternate batches was made purely for code convenience. This point is noted in the revised manuscript for transparency.\"}",
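The per-example learning speed discussed in this exchange — the quantity compared against the c-score — can be proxied cheaply from statistics collected during training, for example as the mean of per-epoch correctness indicators, so that examples which become (and stay) correct early score higher. The exact definition below is an assumption for illustration, not necessarily the paper's:

```python
import numpy as np

def learning_speed(correct_per_epoch):
    """Per-example learning speed proxy: mean per-epoch correctness.

    correct_per_epoch: array of shape (n_epochs, n_examples), where entry
    (e, i) is 1 if example i was classified correctly at the end of epoch e.
    Assumed definition for illustration only.
    """
    return np.asarray(correct_per_epoch, dtype=float).mean(axis=0)
```

Because it reuses predictions already computed during training, a proxy of this form requires no extra forward passes, consistent with the authors' remark that learning speed is the cheapest of the scores they compare.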
"{\"comment\": [\"I thank the authors for their response.\", \"I still believe presenting a full suite of results on all datasets in Table 1 is necessary, and the computational demands of the baselines shouldn't be a top concern.\", \"I appreciate the additional results in the Appendix. It does seem that the optimal values of s and q shift over the number of training epochs.\", \"It is true that some continual learning methods also rely on hyperparameters, but I was comparing it to random uniform sampling. Also, I don't think Figures 4-6 give adequate guidance on the proper selection of s and q. It seems like the optimal values are also very dependent on the dataset and the buffer size. The authors gave some qualitative comments on \\\"dataset characteristics\\\" but I don't think they are backed by empirical results.\"]}",
"{\"title\": \"Response for reviewer pNCs\", \"comment\": \"Thank you very much for your review! Below, we address separately each point in your review:\\n\\n**Theory**\\n\\nAs noted in the general comment, this paper takes an empirical approach, focusing on behavioral observations to reveal novel insights into neural networks. While we acknowledge that theoretical analysis can strengthen the findings, we believe that empirical evidence also plays a crucial role in advancing understanding, especially in areas like continual learning where theoretical tools may be limited.\\n\\n**Correlation under different learning hyper-parameters**\\n\\nRegarding your comments about the empirical analysis, we incorporated your suggestions for additional experiments into the revised manuscript, which we believe improved its quality.\\n\\nThroughout our empirical study, we explored various datasets and settings, selecting hyperparameters such as optimizers, architectures, and regularization heuristically. Given the extensive hyperparameter space in deep learning, exhaustively testing every result against all possible configurations is infeasible for a single paper. However, we incorporated your suggestions and extended one of our main results (specifically Figure 4) to include the proposed hyperparameter variations. Notably, we found that these variations do not alter the reported correlation: the relationship between learning speed and catastrophic forgetting remains consistent. Moreover, buffer compositions effective in one setting often generalize well to others, further supporting the robustness of our findings.\\n\\nThe new experiments are detailed in Appendix C and referenced in Section 3.2. Specifically, we added:\\n* Comparisons across different learning rates, including lowering the learning rates for subsequent tasks. These results can be found in Figure 23\\n* Comparisons across different optimizers, including Adam, SGD, and Adagrad, as suggested. 
These results can be found in Figure 20.\\n* Results on a non-ResNet-based architecture, specifically VGG-16, in Figure 21a.\\n* Comparisons between training with and without regularization. These results can be found in Figure 21b.\\n\\nRegarding dataset complexity, our results already include experiments on Tiny ImageNet (Figures 4 and 8), which is commonly considered as challenging as ImageNet. Nevertheless, we plan to include experiments on ImageNet subsets in the camera-ready version, as they take too long to include in the limited time of the rebuttal.\\n\\n-----\\n\\nThank you again for your constructive feedback, which helped us improve our manuscript. We hope our responses have sufficiently addressed your concerns, and we believe the revisions strengthen the paper. Given the mixed reviews, every point is crucial, and we hope you will consider our clarifications in your final evaluation.\"}",
"{\"comment\": \"Thank you for the swift response!\\n\\n**Updated Table 1:**\\n\\nWe fully agree that presenting Table 1 with all the datasets is crucial for the completeness of the work. Based on your suggestion, we have now included in the revised manuscript Table 1 results for all the datasets you suggested, including CIFAR-100-20, CIFAR-10-5, and TinyImageNet. These results confirm that while other sampling methods occasionally succeed or fail depending on the dataset and buffer size, Goldilocks consistently performs well across all tested scenarios. We hope these additions address your concerns about the completeness of our evaluation.\\n\\n**Hyperparameter Selection and Practical Guidance:**\\n\\nWe appreciate your detailed comments on hyperparameter selection. While our heatmaps in Figures 4-6 were intended to provide an extensive overview of the relationship between learning speed and forgetting, and not to guide the selection of $q$ and $s$, we understand that this broader analysis may give the impression of complexity. While the maximal value in each heatmap may vary across experiments, in practice, Goldilocks does not require the *optimal* hyperparameters to outperform uniform sampling. Across all datasets, a very wide range of $q$ and $s$ values leads to improved performance compared to random sampling, and using any of these hyper-parameters will result in good performance. For example, in the \\\"training epochs\\\" experiments you referred to in Figure 22, any s value between 4\\% and 60\\% achieves better performance than random sampling, with corresponding q values spanning a broad range, sometimes up to 36 different percentage points. Notably, while the *optimal* values across the experiments in Fig. 22 differ, the range of good hyper-parameters remains very similar.\\n\\nTo give guidance on how to choose hyper-parameters in practice, there is a paragraph at the end of Section 3.3. 
The suggested method is ultimately heuristic, as the range of good hyper-parameters is so wide that simple heuristics often get satisfactory results off the shelf, without any additional computing. However, for practitioners with additional resources, we also describe a principled tuning method at the end of Section 3.3, which requires no additional data and performs well empirically. This balance between practicality and flexibility makes Goldilocks applicable while staying true to the constraints of continual learning.\\n\\n**Goldilocks as a Proof of Concept:**\\n\\nIt is important to emphasize that Goldilocks is primarily a proof of concept demonstrating that continual learning methods can be improved by accounting for which examples are prone to forgetting \\u2014 which can be predicted before training on a new task. While Goldilocks achieves strong empirical results and can be applied in practice, we envision that future methods could leverage these underlying principles to achieve even greater performance with more sophisticated approaches.\\n\\nWe hope these clarifications and updates address your concerns and help contextualize the contributions and practicality of Goldilocks. Thank you again for your comments, which helped improve the manuscript.\"}",
"{\"title\": \"Response for Reviewer m8pC\", \"comment\": [\"Thank you for your review and positive feedback! We appreciate the recognition of the strengths in our work and the valuable suggestions for improvement. Below, we address your comments and questions:\", \"**W1 + Q1:**\", \"To address this suggestion, we have significantly expanded the section on CIL in the revised manuscript. Specifically, we now include in Appendix A a repetition of all major figures from the main paper under CIL settings. These include:\", \"Figure 9 (repeating Figure 2)\", \"Figure 10 (repeating Figure 3)\", \"Figure 11 (repeating Figure 4)\", \"Figure 12 (repeating Figure 6)\", \"Figure 13 (repeating Figure 7)\", \"Figure 14 (repeating Figure 8)\", \"This expanded analysis shows that, while CIL naturally leads to lower accuracy compared to TIL (as it is inherently more challenging), the qualitative trends and conclusions remain consistent. The results reaffirm that the insights and analysis in the TIL case also hold for CIL.\"]}",
"{\"title\": \"Response to reviewer NS7R\", \"comment\": \"Thank you for your review. Below, we address the different points you raised separately:\\n\\n**Weaknesses \\u2014 significance and paper's focus**\\n\\nAs noted in the general comment, our work adopts an observational approach, focusing on uncovering behavioral patterns in neural networks rather than introducing a new method or theoretical framework. Specifically, we empirically identify a novel and previously unreported connection between simplicity bias and catastrophic forgetting. While simplicity bias \\u2014 where networks learn simple examples first \\u2014 is well-known, we demonstrate that catastrophic forgetting exhibits a \\\"reverse simplicity bias,\\\" where complex examples are forgotten before simpler ones.\\n\\nThe perceived lack of \\\"surprise\\\" may stem from the robustness of simplicity bias, which intuitively suggests its relevance to forgetting. As almost any neural network learns simple things first, it is intuitive that an equally robust pattern will occur when such a network forgets. We believe that this robustness enhances the significance of our findings: if forgetting patterns mirror simplicity bias, this suggests a foundational phenomenon that can guide future work in continual learning, leveraging established insights and tools from simplicity bias research. While the \\\"surprise\\\" of a result is subjective, we argue that our contribution lies not in the novelty of simplicity bias itself but in its new application and implications for continual learning.\\n\\nAs for Goldilocks, it serves as a proof of concept to demonstrate how accounting for this forgetting pattern can improve continual learning. 
Although we do not agree that the improvement is not dramatic (2-4\\\\% of consistent accuracy gain across different scenarios is not easy to achieve), Goldilocks is mainly designed to show that even a relatively simple sampling function based on the connection between simplicity bias and continual learning can improve a large array of continual learning methods, demonstrating the potential for future works to take into account this connection when devising new methods. \\n\\nFinally, the title reflects the primary focus of the paper: uncovering and understanding the connection between forgetting and catastrophic forgetting. As Goldilocks serves as a practical example, we do not think it should be featured in the title.\\n\\n**Setting hyperparameters**\\n\\nThe wide range of hyperparameters in our experiments aimed to provide a comprehensive view of how buffer compositions impact forgetting. For practical deployment, we added guidance in Section 3.3 of the revised manuscript to simplify hyperparameter selection for new settings.\", \"in_summary\": [\"A heuristic approach is often sufficient. For instance, setting $q=s=20\\\\%$ performed well across all datasets and scenarios tested.\", \"Adjustments can be made based on prior knowledge of the dataset's complexity:\", \"For complex datasets, favor excluding more complex examples $(s>q)$.\", \"For simpler datasets, favor excluding simpler examples $(q>s)$.\", \"For scenarios where computational resources allow, a non-heuristic approach based on auxiliary tasks can further optimize hyperparameters, even with limited data.\", \"We hope these additions address your concerns about practical applicability and encourage you to view this work as a foundation for further exploration of this phenomenon.\"]}",
"{\"metareview\": \"This paper turned out to be a tricky one for me because of the variation in the scores given by different reviewers. While reviewers pNCs, NS7R, and 9eZ5 do not seem to be supporting the acceptance of the paper, reviewers XrZi and m8pC have given very high scores.\n\nI personally read the paper carefully and discussed with the reviewers to ensure fair assessment. Unfortunately, after discussions and reading the reviews carefully, I am recommending rejection of this work in its current form (a few reasons are mentioned below).\n\n\n---\nThe authors investigate the impact of sampling the so-called mid-learned examples on continual learning (CL); they call the underlying sampling method Goldilocks.\n\nSampling a small subset of samples to store as episodes is important in reducing forgetting in CL and is an important aspect to carefully look at in any storage/compute-constrained CL formulation. However, I also believe that such proposals, when based purely on intuitions and empirical observations, demand thorough investigation.\n\nReasons why I think the paper should be rejected (most of them have already been mentioned by the reviewers as well):\n\n- The main intuition that different examples are easy/hard to learn and a model's performance on them varies during training is not new (e.g., [1]) and has been the basis of several works including, say, the renowned focal loss paper. Therefore, I see little novelty when it comes to providing new intuitions. Having said that, using this to propose an effective sampling strategy for CL does have merit and I appreciate the authors for connecting the dots here.\n\n- However, I believe that to ensure that the intuition really works well empirically, the set of experiments provided seems to be very limited in scope. They do provide some promising results but aren't enough to accept the claims made in the paper. For example, I would have preferred seeing experiments in (1) both online and offline settings (both settings have formulations that rely on episodic memory); (2) task-incremental and class-incremental with several classes/task and at least 15-20 tasks to see the true effect of CL; (3) large-scale set-ups (ImageNet, iNaturalist at the very least); (4) set-ups where pre-trained backbones are used (e.g., continual fine-tuning of CLIP, or RanPAC-type set-ups), _perhaps in this one there isn't much difference between fast and slow learned samples due to the rich representations we already obtain from pretrained models_? I think investigating all these aspects would be crucial to provide the right intuition on where one should expect Goldilocks to work. Since the paper is primarily empirical, the study must be done exhaustively across a variety of experiments to justify the arguments made.\n\n- Perhaps examples learned slowly are more important to keep in the episodic buffer, and the ones learned fast are being learned fast due to spurious correlation (Fig 1: the Bees examples all have a yellow background). A similar comment was also made by Reviewer 9eZ5. Discussion along these lines would be highly valuable for the paper.\n\n- The improvements also seem to be marginal given that an extra forward pass is needed every epoch to compute the importance score. I also find the claim in Fig 5 to be too bold given it is empirically tested on only two settings, A-B and A-C.\n\nI would like to mention that while going through the reviews I realised that the **authors worked significantly during the rebuttal period**, added new experiments etc., and I truly appreciate their effort towards this. I'm sure these efforts will eventually contribute towards making the paper much stronger.\n\n[1] An Empirical Study of Example Forgetting during Deep Neural Network Learning, Toneva et al., 2019\n\n[2] RanPAC: Random Projections and Pre-trained Models for Continual Learning\n\n[3] Fine-tuning can cripple your foundation model; preserving features may be the solution\", \"additional_comments_on_reviewer_discussion\": [\"The concerns raised by the reviewers were mainly regarding (1) lack of proper experiments (small datasets, use of old architectures); (2) lack of novelty behind the main idea (and lack of theoretical justifications); (3) lack of proper analysis (dependence on learning rate, architecture, etc.); (4) restricted scenarios (e.g., did not perform experiments on both online and offline settings, task- and class-incremental), etc.\", \"Reviewers greatly appreciated the active participation of the authors during the rebuttal. The authors did provide several new results (CIFAR-100-20, CIFAR-10-5, and TinyImageNet), ablations (effect of different optimizers, learning rates, etc.) and arguments that were compelling.\", \"However, there was no unanimous agreement towards the acceptance of this work, primarily because of the reasons mentioned above. Since the work is empirical in nature, the experiments must be rigorous.\"]}",
"{\"title\": \"Response for Reviewer XrZi\", \"comment\": \"Thank you very much for your review!\", \"addressing_the_weaknesses_and_questions_you_raised\": \"**Completeness:**\\n\\nWe agree that extending Table 1 to include larger datasets such as CIFAR-100-5, CIFAR-100-20, and Tiny-ImageNet would strengthen the manuscript and enhance its completeness. However, we were unable to conduct these additional experiments during the rebuttal period due to the computational demands of certain baselines (e.g., GSS and IPM) and the modifications required to adapt their code for multi-task settings. We will incorporate these datasets into Table 1 for the camera-ready version.\\n\\n**Limitation:**\\n\\nTraining Goldilocks with a limited number of training epochs presents two potential challenges. First, because the *learning speed* (Eq. (1)) depends on the number of epochs, its accuracy diminishes as the number of epochs decreases. Second, as you noted, if the networks train for too few epochs, they may not converge, leading to instances where examples are incorrectly classified as \\\"slowly learned\\\" due to insufficient training.\\n\\nIn the revised manuscript, we address these concerns by presenting additional experiments (Appendix C + Figure 22). These experiments show that the same buffer compositions remain effective even with significantly reduced training iterations. Although there is some noise introduced by the less accurate *learning speed* in these cases, the qualitative results remain consistent. 
On the other hand, for scenarios like streaming data, where the network may not converge adequately, the *learning speed* simply cannot capture well enough how fast the model learns certain examples, and Goldilocks requires a different complexity measurement to work well (which is beyond the scope of our work).\n\nAs recommended, we have incorporated these findings into the limitations section (Section 4), and we discuss the results from Figure 22 in Appendix C and Section 3.2 to provide practical guidance on the appropriate number of epochs needed for the method to perform optimally.\n\n\n**Hyperparameters:**\n\nFigs. 4-6 present a wide range of hyperparameters $q$ and $s$ to demonstrate their impact comprehensively. These results indicate that a broad selection of $q$ and $s$ values can effectively enhance learning, highlighting the ease of hyperparameter selection for new datasets. For instance, in all tested settings and datasets, simply setting $s=q=20\\%$ significantly improved performance.\", \"these_straightforward_choices_can_be_further_refined_based_on_dataset_characteristics\": \"for challenging datasets, setting $q < s$ may be beneficial, as slowly learned examples are less likely to contribute to learning and can be removed more aggressively. Conversely, for simpler datasets, choosing $s < q$ may be preferable. Importantly, these heuristic adjustments require no additional computation and align with the premise of continual learning.\n\nWhile many continual learning studies provide hyperparameters without justification, we opted to include explanations to guide practitioners. For scenarios with additional computational resources, we also propose a systematic approach for hyperparameter tuning, which we have described in Section 3.3. Additionally, we added a paragraph to the revised manuscript, at the end of Section 3.3, explaining these points and guiding how to pick hyper-parameters in practice when encountering new data, which we hope will help guide future readers.\n\n**Question:**\n\nIn the paper, we present results on multiple datasets, including CIFAR-100-2, CIFAR-100-20, CIFAR-10-2, CIFAR-10-5, TinyImageNet-2, and TinyImageNet-10. Due to the limited time available during the rebuttal period, we were unable to include experiments on datasets significantly different from those already evaluated. However, we added an explanation on how to pick hyper-parameters at the end of Section 3.3. Additionally, we plan to incorporate additional datasets for the camera-ready submission. Moreover, in this revised manuscript, we have included results for an alternative split of CIFAR-100 into two tasks, with classes split into each task randomly. These results are provided in Appendix C, Figure 21c.\"}"
]
} |
ED5w271rWo | Banyan: Improved Representation Learning with Explicit Structure | [
"Mattia Opper",
"Siddharth N"
] | We present Banyan, a model that efficiently learns semantic representations by leveraging an inductive bias towards explicit hierarchical structure. Although typical transformer-based models excel at scale, they struggle in low-resource settings. Recent work on models exploiting explicit structure has shown promise as efficient learners in resource-constrained environments. However, these models have yet to demonstrate truly competitive performance. Banyan bridges this gap, significantly improving upon prior structured models and providing, for the first time, a viable alternative to transformer embeddings for under-represented languages. We achieve these improvements through two key innovations 1) A novel entangled tree structure that resolves multiple constituent structures into a single shared one, explicitly incorporating global context. 2) Diagonalized message passing functions that increase the influence of the inductive bias. Our final model has just 14 non-embedding parameters yet is competitive with baselines many orders of magnitude larger. Banyan outperforms its structured predecessors and competes with large unstructured models across various semantic tasks in multiple languages. Notably, it excels in low-resource settings, highlighting its potential for efficient and interpretable NLP in resource-constrained environments. These results underscore the value of appropriate inductive biases in capturing semantic relationships and open new avenues for efficient, interpretable NLP models. | [
"Representation Learning",
"Structure",
"Semantics",
"Syntax",
"Induction",
"Composition"
] | Reject | https://openreview.net/pdf?id=ED5w271rWo | https://openreview.net/forum?id=ED5w271rWo | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"trWXTvcWpI",
"ruICXIPhEP",
"kdRAzIAHpu",
"dqr8etukdT",
"diY3KWhUFe",
"aUDZX2uXpU",
"UWf8hG94B9",
"SzoVq24mCj",
"NxdPbdctTt",
"NtfQFi1EBS",
"IKyGgxNiRx",
"HBJqQAwRVI",
"7uH3NwAidW",
"5uVbHCB5Hf"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1729928856086,
1731973083906,
1729621937591,
1731972279119,
1730758374760,
1732565023012,
1732116259266,
1731972495202,
1737524111496,
1734903405522,
1731972461561,
1730698624530,
1732240793527,
1732564983270
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11216/Reviewer_gpX7"
],
[
"ICLR.cc/2025/Conference/Submission11216/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11216/Reviewer_ryjZ"
],
[
"ICLR.cc/2025/Conference/Submission11216/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11216/Reviewer_CFgK"
],
[
"ICLR.cc/2025/Conference/Submission11216/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11216/Reviewer_ryjZ"
],
[
"ICLR.cc/2025/Conference/Submission11216/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11216/Area_Chair_5Mu2"
],
[
"ICLR.cc/2025/Conference/Submission11216/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11216/Reviewer_Equd"
],
[
"ICLR.cc/2025/Conference/Submission11216/Reviewer_gpX7"
],
[
"ICLR.cc/2025/Conference/Submission11216/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"The paper presents Banyan, an efficient and lightweight framework for representation learning on low-resource languages. The method has been evaluated on common English tests as well as a series of low-resource languages. This method demonstrates great performance as well as remarkable efficiency. While the technical innovations are impressive, there are still several remaining concerns.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed method is technically solid, and makes substantial improvements compared with existing methods.\\nThe proposed method is highly efficient and lightweight, especially when compared with large language models.\", \"weaknesses\": \"Some evaluation parts still need further validation. The details are described in the questions.\", \"questions\": \"1. Regarding the low-resource language evaluation, the authors only used four languages, while there are many other languages in the released SemEval dataset. Is there any cherry-picking on the languages evaluated? How will the model perform on other low-resource languages?\\n2. The authors trained Banyan on datasets from the Leipzig Corpora Collection. Is there any overlap between the training corpora and the testing dataset? And if the baseline methods such as XLM-R are also pre-trained on these corpora, how will the models perform?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your review and positive feedback on our paper! We have edited the manuscript to introduce the task earlier and will be adding an appendix containing the scale analysis you requested. A brief overview of the analysis so far:\\n\\nWe usually run at batch size 512 (though larger is possible). Each tree covers a sequence of an average length of 30 and therefore has 29 non-terminal nodes. Non-entangled, this means that the collection of trees would have about 33k nodes, while the entangled tree consists of about 18k, so we have roughly a 50% duplicate rate. Interestingly, the amount of reuse for a node follows a roughly Zipfian distribution, which means that this law also applies to constituent structures.\\n\\nWe appreciate your constructive feedback; if there are any further questions you would like us to answer, please let us know!\"}",
"{\"summary\": \"This paper introduces a new recursive graph neural network for learning text representations in low-resource languages: Banyan (like the tree). This model extends previous work by building nested trees over sequences that share the same tokens. In Banyan, the same tokens will have the same tree node, even if they come from different sequences. For scalability reasons, the trees are constructed from a batch of samples rather than from an entire dataset. Embeddings are learned from a simplified message passing algorithm that traverses the trees in both bottom-up and top-down directions.\\nHaving nested trees provides multiple advantages, notably the reduction of duplicated nodes and multiple context representations within the same node (in downward embeddings).\\nThese advantages translate to strong semantic representations in both English (when compared to RoBERTa) and lower-resourced languages (when compared to XLM-R, Llama 3.1, Mistral).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a novel recursive model that learns textual representations and its learning mechanism.\\nThe proposed architecture is novel and seems promising as it yields good results when compared to other more classical methods.\\nIn addition, the proposed method is very efficient: it requires very little training and has only 14 non-embedding parameters.\", \"weaknesses\": \"The task being tackled is not clear from the abstract or the introduction. The motivation is well described (learning powerful text representations for under-resourced languages), but the task used to evaluate the Banyan model (Semantic Textual Similarity - STS) is not described. The paper could gain clarity by mentioning the task earlier.\\n\\nEvaluation is based on cosine similarity between sentences compared to human judgment. The paper would gain greater significance if the representations from the proposed model were tested in targeted applications such as sentiment classification or machine translation.\", \"questions\": \"Previous approaches used 1 tree per sequence. Banyan uses one tree for all sequences sharing the same tokens in a batch.\\nSome analysis of scale would be nice to complement Figure 4 in the paper. For instance, how many trees do you usually have in an entire batch? How many (sub-)sequences are represented in 1 tree?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your feedback regarding the paper; we have updated the manuscript as follows:\\n\\n**Terminological Consistency and Citation Format:** Thank you for pointing these out; we have edited them to be consistent in the updated manuscript.\\n\\n**Composition for efficiency and generalisation:** \\nThank you for pointing this out; we have included the following relevant references in the updated manuscript. The notion that the principle of compositionality allows humans to generalise is a long-standing one, sometimes formally referred to as systematic compositionality (Fodor & Pylyshyn, 1988) [1], and involves the ***infinite use of finite means*** (Chomsky, 1965) [2].\\nIn more recent years, this has been the subject of a range of papers including formalisation (e.g. Wiedemer et al., 2023) [3], analysis (e.g. Ito et al., 2022) [4], and modelling (e.g. Lake et al., 2017) [5].\\n\\n**Entangling Criteria:** The criterion for entangling trees together is that they both contain instances of the same node (e.g. \\u2018some are born to\\u2019 as illustrated in Figure 2). The benefits include both a significant increase in memory efficiency (see Section 5.3) and the explicit tying together of higher-order nodes that have the same constituency structure (lines 225-239 for description and Section 5.4 for ablation of effectiveness).\\n\\n**Prior Tree Models:** Thank you for pointing out this reference; we have included it in our related work.\\nHowever, we note that their innovation appears to be in the task-specific application and not the architecture itself. The model is in fact derived from the work of Socher et al. [6,7], and is identical to (and preceded by) the IORNN [8]---all of which we cite in our work. The work done in Ji and Eisenstein, 2015 does not preclude any of the contributions in this current work. Note also that the Self-StrAE baseline we use already outperforms the IORNN and Tree-LSTM [9], and Banyan is a significant improvement over Self-StrAE. 
\\n\\n**Missing Recent LLMs:** We included **Llama 3.1 and Mistral Nemo** which at time of writing were very new releases, but we are happy to include additional LLMs. Which large language models in particular do you have in mind? \\n\\n\\nDo let us know if you have any additional concerns regarding the paper, we look forward to engaging with you during the discussion period!\", \"references\": \"[1] Jerry A. Fodor and Zenon W. Pylyshyn. Connectionism and cognitive architecture: A critical\\nanalysis. Cognition, 28(1):3\\u201371, March 1988\\n\\n[2] Noam Chomsky. Aspects of the Theory of Syntax, 1965\\n\\n[3] Compositional Generalization from First Principles\\nThadd\\u00e4us Wiedemer, Prasanna Mayilvahanan, Matthias Bethge, Wieland Brendel, 2023\\n\\n[4] Compositional generalization through abstract representations in human and artificial neural networks\\nTakuya Ito, Tim Klinger, Douglas H. Schultz, John D. Murray, Michael W. Cole, Mattia Rigotti, 2022\\n\\n[5] Building machines that learn and think like people\\nBrenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, Samuel J Gershman, 2017\\n\\n[6] Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning.\\nSemi-supervised recursive autoencoders for predicting sentiment distributions. In Empirical\\nMethods in Natural Language Processing (EMNLP), pp. 151\\u2013161, 2011.\\n\\n[7] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and\\nChristopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank.\\nIn Empirical Methods in Natural Language Processing (EMNLP), pp. 1631\\u20131642, 2013.\\n\\n[8] Phong Le and Willem Zuidema. Inside-outside semantics: A framework for neural models of\\nsemantic composition. In NIPS 2014 Workshop on Deep Learning and Representation Learning,\\n2014\\n\\n[9] Mattia Opper, Victor Prokhorov, and Siddharth N. Strae: Autoencoding for pre-trained embeddings\\nusing explicit structure. 
In Proceedings of the 2023 Conference on Empirical Methods in Natural\\nLanguage Processing, pp. 7544\\u20137560\"}",
"{\"summary\": \"This paper presents a strategy of representation learning by utilizing structural information discovered during learning. The proposed work is built upon a prior work with two specific pieces of improvement on building the structures and propagating the information during representation learning. Empirical evaluation was performed on multiple NLP tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed new strategies for constructing tree structures are intuitive\", \"The proposed framework is parameter efficient and easy to learn.\"], \"weaknesses\": [\"The writing of this paper can be significantly improved. There are some inconsistent statements and inconsistent terminology, which should be easily fixed after proofreading. For example, the paper uses both \\\"under resourced languages\\\" and \\\"under represented languages\\\", which I think should be \\\"low-resource languages\\\".\", \"There are some unsupported claims; for example, in the first section, the claim \\\"It is thought that this principle lets humans generation ...\\\" should be supported with references from prior work.\", \"There are also some technical details that are unclear in the paper. For example, in lines 204-207, what are the criteria for entangling different trees together, and what are the benefits of entangling subtrees together?\", \"There is a minor issue with the citation format; it should be, for example, (Tai et al., 2015).\", \"There are some existing works along the line of passing information upward and downward along a tree structure to learn representations, which are not discussed in this paper. For example, Ji and Eisenstein, One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations, 2015.\", \"The experiments are not sufficient; for example, some recent large language models are not included in the comparison.\"], \"questions\": \"Please refer to the comments in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer Equd,\\n\\nWe hope our response has addressed your concerns; please let us know if you have any further questions or recommendations for us to respond to. Given that we are reaching the end of the discussion period, we want to make sure we have time to incorporate your feedback. \\n\\nBest, \\n\\nThe Authors\"}",
"{\"title\": \"acknowledgement\", \"comment\": \"thanks for addressing my comments. No further questions at this time.\"}",
"{\"comment\": \"Thank you for your feedback on our paper, recognising the efficiency and improvements that our method delivers! We would like to address your concerns in our response:\\n\\n**How are embeddings initialised:** We initialise randomly from a uniform distribution and update the embeddings during training. This means that tokens that can frequently be merged together have their representations drawn together. In turn this leads to regularities emerging in the structure which the model can exploit for better reconstruction. Despite the fact that the model is randomly initialised this approach leads to consistent patterns forming and performance is stable across initialisations. We have attempted to highlight this in lines 136-141, please let us know if anything remains unclear!\\n\\n\\n**Parameter Scale:** The primary goal of our research is to test whether inductive biases can help make learning more resource efficient. The low number of parameters enhances the effect of the biases and helps drive efficiency. We agree that the question of whether the method can be applied to transformers is interesting, but we think it falls outside of the current scope of our work. Particularly because one of the advantages of Banyan is that it can be run very cheaply. However, we have added a section to the conclusion where we discuss potential avenues for incorporating the technique with transformers (lines 501-506), because we definitely agree it is an interesting question for future work. Please let us know if you think this is an acceptable compromise and if there are any further questions or suggestions you have for the section. \\n\\n\\n**Which embeddings are used:** Currently we only use the up embedding as this represents the semantics of the span. However there are a lot of potential applications involving the down embeddings such as NER, IR etc. which could be investigated in future. \\n\\nIf you have any further questions or concerns please let us know. 
We are committed to engaging with you throughout the rebuttal process and appreciate your feedback! If you feel we have adequately addressed your concerns, please consider raising your score.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"metareview\": \"This paper proposes a recursive autoencoder for learning text representations. The method works by recursively merging adjacent embeddings from the bottom up to build a tree, and then splitting top-down to reconstruct leaf embeddings. Experiments on semantic text similarity demonstrate the effectiveness of the proposed approach.\", \"strengths\": \"1. The method itself is simple and provides an alternative to existing representation learning methods by focusing on structures.\", \"weaknesses\": \"1. The evaluation is mainly conducted using semantic text similarity but not directly on downstream applications such as machine translation or sentiment analysis, or better yet, the GLUE benchmark, which is often used for evaluating representation learning methods.\\n\\nOverall, while this is a very interesting method, the evaluation itself is weak. I think this paper can be significantly improved by evaluating it on downstream applications. I'm recommending rejection for the current version, but I wouldn't mind if the paper gets accepted.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers' questions are clarification questions that have been addressed by the authors. However, reviewer ryjZ's point on limited evaluation has not been addressed yet, and I think that review provides constructive feedback that can be incorporated into the next version of this work.\"}",
"{\"comment\": \"Thank you for your feedback regarding our paper! We are glad that you found our \\u2018technical innovations impressive\\u2019 and recognise the \\u2018great performance and remarkable efficiency\\u2019 of our method. We also appreciate the concerns you raised and agree that addressing them will help to significantly strengthen our work. We have updated the manuscript and provide a general response below:\\n\\n\\n**More Languages:** We do not cherry pick! Our choices were based on representing a spectrum in terms of \\u2018well resourcedness\\u2019 and where it looked like the test sets had reasonably good annotation. Nonetheless we recognise the concern and have expanded our evaluation in section 5.2 to additionally include Indonesian, Arabic, Telugu, Marathi, Moroccan Arabic, Kinyarwanda and Hausa. The pattern remains largely the same as before. Generally speaking, the more low-resource the language, the more favourable the comparison becomes for Banyan. Please see results and further discussion in the updated manuscript. \\n\\n\\n**Possible Data Leakage:** We have checked whether any test sentences appear in the pretraining corpora using lexical overlap. We did not find any exact matches or significant outliers that might indicate leakage between the two. If there are further tests you would like us to run please let us know, and we would be more than happy to do so! \\n\\n**Finetuning XLM-R:** We have finetuned XLM-R on the same corpora we used to train Banyan and report results as a further baseline in section 5.2. While finetuning does improve performance (sometimes quite significantly) Banyan remains better overall, particularly in the low resource settings. We would also like to add that Banyan can be trained on a single A40 in under an hour, while a reasonable finetune of XLM-R requires 4xA40s and between 10-16 hours depending on how out-of-distribution the language is for the tokenizer. 
As finetuning XLM-R is such a comparatively compute-intensive process, we are continuing to work on adding further random seeds to the evaluation and will be updating with standard deviations throughout the rebuttal phase. However, we ask for your patience as we are on a compute-constrained academic budget. \\n\\n\\nIf there are any further questions that you would like us to respond to, please let us know! If you feel we have adequately addressed your concerns, please consider increasing your score. Thank you for your time and constructive feedback!\"}",
"{\"summary\": \"In this work, the authors present a method called \\\"Banyan\\\" that can learn semantic representations with hierarchical structure. It has two main improvements (entangled tree structure and diagonalized message passing functions) compared to Self-StrAE. According to their experimental results, Banyan achieves competitive performance with transformer-based baselines. It also shows low cost and efficiency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is a novel approach for semantic representation. Tree structure is injected into the representation.\\n\\n2. According to the experimental results (Table 1 and Table 2), Banyan achieves competitive performance with transformer-based baselines on the sentence level and word level. The results are also much better than the previous baseline (Self-StrAE).\\n\\n3. There is a clear ablation study on the effect of different modeling changes. It is helpful for others to understand the method.\", \"weaknesses\": \"1. For the proposed semantic representations, both the structure and representation are learned. The initial token embeddings are used to determine which tokens to merge and in what order, so they have a large impact on the proposed method. This part is a little unclear (are they randomly initialized? Are they updated during model training? etc.)\\n\\n2. Only 14 non-embedding parameters are used in the proposed method. This could also limit the ability of the proposed model. It would be great if the proposed method could be used with transformer-based embeddings in the future.\", \"questions\": \"There are composition and decomposition (corresponding to up and down embeddings); what are the embeddings used for the sentence level and word level?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your reply, looks better now.\"}",
"{\"comment\": \"Dear Reviewer CFgK,\\n\\nWe hope our response has addressed your concerns; please let us know if you have any further questions or recommendations for us to respond to. Given that we are reaching the end of the discussion period, we want to make sure we have time to incorporate your feedback. \\n\\nBest, \\nThe Authors\"}"
]
} |
EBaMTeWi2K | PLAY2PROMPT: Zero-shot Tool Instruction Optimization for LLM Agents via Tool Play | [
"Wei Fang",
"Yang Zhang",
"Kaizhi Qian",
"James R. Glass",
"Yada Zhu"
] | Large language models (LLMs) are increasingly integrated with external tools to complete user requests. Many real-world applications require LLMs to use specialized tools in a zero-shot setting. To achieve this, current methods primarily rely on prompting LLMs with tool-specific information, yet tool documentation is often underspecified or noisy, limiting effectiveness. Manual improvements are inefficient and impractical, as they require domain expertise to rewrite documentation and test on carefully curated held-out datasets to evaluate performance gains. Automatic prompt engineering techniques are not applicable either, because they require labeled examples, which is unavailable in the zero-shot setting. In this work, we introduce PLAY2PROMPT, an automated framework that iteratively refines tool documentation and generates usage examples. PLAY2PROMPT enables LLMs to explore tool input-output behaviors, allowing us to effectively search the space of possible tool descriptions and examples. The generated examples not only guide LLM inference but also serve as validation data to ensure more effective tool use. Extensive experiments on real-world tasks demonstrate significant improvements in zero-shot tool performance across both open- and closed-source models. | [
"large language models",
"zero-shot tool use",
"prompt optimization"
] | https://openreview.net/pdf?id=EBaMTeWi2K | https://openreview.net/forum?id=EBaMTeWi2K | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zJr0CKJkuT",
"vacT7kNpHp",
"sUOtQhSMrd",
"XjDVLMBhDk",
"WtZijQpkbx",
"QE8BPl24cC"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1730712616481,
1730542664324,
1730552783709,
1730707050574,
1732568725111,
1731237242166
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13080/Reviewer_xXT2"
],
[
"ICLR.cc/2025/Conference/Submission13080/Reviewer_6tG1"
],
[
"ICLR.cc/2025/Conference/Submission13080/Reviewer_m9z7"
],
[
"ICLR.cc/2025/Conference/Submission13080/Reviewer_5g5W"
],
[
"ICLR.cc/2025/Conference/Submission13080/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13080/Reviewer_WpAJ"
]
],
"structured_content_str": [
"{\"summary\": \"The manuscript describes a technique dubbed *play2prompt* which consists of two algorithmic proposals for improving the baseline use of tools by LLMs. The particular algorithmic changes complement the baseline ReAct, a very well-known and established algorithm for multi-step reasoning, including multi-step tool use. At a high level, the proposed changes help the LLM make better use of a tool. This is done by using additional compute (e.g. MC sampling, BeamSearch sampling, Self-reflection) to identify description problems (e.g. tool parameters, general description of how to use the tool, etc) and construct some demonstrations instead of using the tool as zero-shot.\\n\\nThe results are mainly reported on StableToolBench, a recently introduced benchmark with many API call examples. Seemingly the benchmark provides a way to work around offline services and use an API simulator. The models studied involve LLaMA-3-8B, LLaMA-3-70B and GPT-3.5. The authors report improvements for every model and category of the benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The main strengths of the manuscript consist of\", \"Addressing a real-world challenging problem, i.e. tools have a lot of variability in documentation and argument descriptions, and the models tend to struggle with using them outside well-studied domains.\", \"Leveraging a benchmark with many API calls, thus reflecting fairly well the problem discussed. Showcasing improvements on 2 model classes, one open-source (Llama) and another closed-source (GPT-3.5). At least two sizes as well for OSS (8B, 70B).\", \"Rather clean description of the algorithms involved in refining and improving the tool use.\"], \"weaknesses\": \"I think there is insufficient novelty in this work. To ground my feedback, I suggest these angles for discussion or improvements:\\n* The choice of baseline challenges the innovation brought by the manuscript. 
In practice, ReAct is very well-known at this point, as is the fact that zero-shot performance for tool use is not ideal. The authors do describe related work in auto-prompting, but do not make use of any method to create few-shot examples for tools to compare against. Besides the ones from the authors in Related Work, some others to call out are ART [1], DSPy [2].\\n\\n* The benchmark was introduced rather recently and unfortunately, in my opinion, it dilutes the manuscript's claimed contribution. In particular, it is difficult to state whether the types of problems illustrated in Figure 2 (e.g. tool descriptions with wrong arguments) are more frequently encountered in this benchmark. Some other benchmarks to concretely call out could be Mint [3], APIBench [4]. \\n\\n[1] https://arxiv.org/abs/2303.09014\\n[2] https://github.com/stanfordnlp/dspy\\n[3] https://arxiv.org/abs/2309.10691\\n[4] https://arxiv.org/abs/2305.15334\", \"questions\": \"Looking forward to discussing the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces PLAY2PROMPT, an automated framework that enhances the ability of large language models (LLMs) to utilize tools effectively in zero-shot settings. The framework iteratively refines tool documentation and generates example tool usage demonstrations, allowing LLMs to explore tool functionalities without relying on external examples. PLAY2PROMPT employs a search-based trial-and-error approach augmented with self-reflection, enabling interaction with tools and iterative improvements to tool descriptions and demonstrations. The paper demonstrates PLAY2PROMPT's effectiveness through extensive experiments on real-world tasks, showing significant improvements in zero-shot tool performance across open- and closed-source models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. PLAY2PROMPT tackles the problem of optimizing tool usage in LLMs without relying on labeled data, which is a novel approach in the field of natural language processing and AI tool integration.\\n2. By focusing on zero-shot learning, PLAY2PROMPT extends the application of LLMs to new domains where labeled data may be scarce or non-existent, which is a significant contribution to the adaptability of LLMs.\", \"weaknesses\": \"1. There is a concern that the baseline (ReAct) and results (PLAY2PROMPT) in the experiments might be too weak, particularly given that the current SOTA on the [StableToolBench benchmark is around 70%](https://github.com/THUNLP-MT/StableToolBench?tab=readme-ov-file#model-experiments-results). For GPT-3.5-Turbo-1106, the result on StableToolBench is 62.2\\u00b10.8. I refer to the results from the StableToolBench GitHub repo. If this is an unreasonable comparison, please explain in detail.\\n2. There are 2 examples in Figure 2 and Figure 3 about single-tool scenarios; please provide some examples for multi-tool scenarios.\", \"questions\": \"1. 
PLAY2PROMPT uses a score reward `r_t` to evaluate the quality of each state during optimization. Please provide a visualization or quantitative analysis of how the reward `r_t` changes over iterations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents PLAY2PROMPT, a novel zero-shot tool instruction optimization framework designed for large language model (LLM) agents to improve tool use. The approach emphasizes \\\"tool play,\\\" where the LLM explores the input-output behavior of tools iteratively, thereby generating optimized descriptions and usage examples without labeled data. PLAY2PROMPT refines tool documentation and validates tool usage by enabling LLMs to \\\"play\\\" with tools to learn effective usage patterns. The authors evaluate this framework on the StableToolBench benchmark, showing significant performance improvements for both open- and closed-source LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The approach of having LLMs \\u201cplay\\u201d with tools to learn how to use them is a fresh take on tool optimization in a zero-shot setting. It\\u2019s creative, especially since it doesn\\u2019t require labeled data, just exploration and self-reflection.\", \"The paper backs up its claims with extensive testing and solid methodology. The authors use structured search techniques and break down the process into distinct steps for refining tool documentation and generating example usage, which keeps it organized and effective.\", \"The paper is generally easy to follow and well-structured. It clearly explains the framework, breaking down each component, so it\\u2019s straightforward to see how PLAY2PROMPT works.\"], \"weaknesses\": [\"The method works well for single-tool scenarios, but it doesn\\u2019t yet support multi-tool use, which limits it in cases where tasks require switching between tools or combining multiple tools.\", \"Beam search, while effective, could be computationally demanding, particularly for larger models. 
Testing alternative search strategies might improve efficiency without sacrificing performance.\", \"The framework\\u2019s benefits vary with model size, and larger models seem to gain more from PLAY2PROMPT. Smaller models might not generalize as well, suggesting they may need more tailored adjustments.\", \"The main metric used, solvable pass rate, gives a basic view of improvement but doesn\\u2019t dive into specific types of failure cases, like whether certain tasks consistently remain unsolved.\"], \"questions\": \"How much does PLAY2PROMPT rely on the base model\\u2019s initial understanding of tools? Would smaller, less powerful models need extra tweaks to achieve similar improvements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper addresses the critical issue of low-quality tool documentation when LLMs utilize external tools, which can significantly impact model performance. The authors propose Play2Prompt that employs beam search to generate multiple candidate documentations and demonstrations, selecting optimal ones through evaluation. Empirical results on StableToolBench demonstrate that the proposed method enhances the performance of various models, including Llama-3-8b, Llama-3-70b, and GPT-3.5-turbo.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper tackles a fundamental challenge in tool utilization of LLMs: the quality of tool documentation, which is crucial for effective model-tool interaction.\\n2. The proposed method demonstrates remarkable effectiveness and robustness, achieving significant improvements using Llama-3-8b rather than requiring more computationally intensive models like GPT-4, highlighting its practical applicability.\\n3. The comprehensive analysis of the impact of tool documentation and demonstrations provides valuable insights for future research directions.\", \"weaknesses\": \"1. The reward computation relies on LLM-based evaluation by prompting, raising concerns about the reliability of the search process. This dependency on LLMs for evaluation may introduce biases or inconsistencies in the quality assessment of generated documentations as LLMs may generate inaccurate scores.\\n2. The experimental evaluation would be more convincing with validation across additional datasets beyond StableToolBench to demonstrate broader applicability (e.g., BFCL[1], ToolQA[2], etc).\\n3. In the Ablation on Search Strategies section, it claims that the MC variant does not explore the search space well enough to reach better states. 
Given the widespread success of Monte Carlo Tree Search (MCTS) in LLMs [3,4], a comparative analysis between MCTS and the proposed beam search approach would provide valuable insights into their relative effectiveness for this specific application.\\n4. The paper would benefit from a clearer presentation, specifically through the inclusion of a high-level flowchart or algorithmic pseudocode before the Methodology Section to enhance reader comprehension.\\n\\n[1] Berkeley function calling leaderboard. https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html\\n\\n[2] Yuchen Zhuang, et al., ToolQA: A Dataset for LLM Question Answering with External Tools. https://arxiv.org/abs/2306.13304\\n\\n[3] Shibo Hao, et al., Reasoning with Language Model is Planning with World Model. http://arxiv.org/abs/2305.14992\\n\\n[4] Weimin Xiong, et al., Watch Every Step! LLM Agent Learning via Iterative Step-Level Process Refinement. http://arxiv.org/abs/2406.11176\", \"questions\": \"See the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"The authors acknowledge the challenge of zero-shot tool use for LLMs given the noisy nature of tool documentation. The authors introduce \\\"PLAY2PROMPT\\\", which is a system for LLMs to explore tool-use settings in simulation, enabling them to refine the tool descriptions and examples. With Play2Prompt the authors are able to demonstrate 3-6\\\\% point improvements for open-source (LLAMA) and closed-source (GPT-3.5) LLMs on StableToolBench.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a comprehensive and complete technique to address what is often not focused upon - improving documentation for tools such that the LLM knows when to invoke them. This is fundamental to improving the LLM's ability to call tools in a zero-shot setting.\\n\\n2. The technique of first sampling the tool invocation (followed by rejection sampling), and then generating the corresponding query x and answer y, is a smart technique to introduce variance in the query distribution.\", \"weaknesses\": \"1. From the definition of the tool, v = (w, y, i), which includes the question, answer, and the invocation, the authors do not include the \\\"environment\\\" that the tool \\\"invocation\\\" is part of. While I understand the benefits of isolating the tool call from the environment, I wonder if, by not capturing the information at all, the system would be losing information relevant to adjudicating whether the tool call is correct or incorrect?\\n\\n2. In line 213, for generating the query \\\"x\\\", I'm wondering if the team has an ablation on whether 3-shot in-context prompt-based query generation would be sufficient - or in other words, what's the benefit from the seemingly compute-intensive technique of self-reflection and refinement? \\n\\n3. How would the system handle multi-step or multi-turn tool calls? 
From line 285, the authors mention \\\"an API service represents a tool that contains multiple sub-tools, with each sub-tool corresponding to f in our definition.\\\" which I would read as ~ creating a single-step tool-call proxy for multiple tool calls? Which isn't a true multi-step / single-step call?\", \"questions\": \"1. I'm curious how, using the definition in Section (2), a nested tool call or a sequence of tool calls would be supported. For example, if f = (u, I, g), would a nested tool call be g_1(g_o), in which case what would the description (u) be? A combination of the two calls, or a totally new call?\\n2. StableToolBench is a comprehensive benchmark and it is promising to see improved performance. Does this generalize well to other zero-shot benchmarks as well, such as the Berkeley Function Calling Leaderboard? \\n3. Line 172: \\\"Specifically, given a state s_t, an action a_t is generated by sampling from an optimization model M, conditioned on the input-output information obtained from tool interactions.\\\" Why would the action \\\"a_t\\\" be determined by the optimization model (M) and not a deterministic system?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
EBT0oymkZb | Towards Zero-Shot Generalization in Offline Reinforcement Learning | [
"Zhiyong Wang",
"Chen Yang",
"John C.S. Lui",
"Dongruo Zhou"
] | In this work, we study offline reinforcement learning (RL) with zero-shot generalization property (ZSG), where the agent has access to an offline dataset including experiences from different environments, and the goal of the agent is to train a policy over the training environments which performs well on test environments without further interaction. Existing work showed that classical offline RL fails to generalize to new, unseen environments. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. Theoretically, our framework is capable of finding a near-optimal policy with ZSG. Empirically, our framework demonstrates the ability to enhance the performance of the base offline RL methods. Our result serves as a first step in understanding the foundation of the generalization phenomenon in offline reinforcement learning. Our codes are released at [this link](https://anonymous.4open.science/r/ProcgenExp-B5B4). | [
"offline reinforcement learning",
"generalization"
] | Reject | https://openreview.net/pdf?id=EBT0oymkZb | https://openreview.net/forum?id=EBT0oymkZb | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"vYvpRQatMf",
"uchbKMgwRj",
"nC4a0AQBoR",
"mNiOKWZvNx",
"kAVn1V2cnH",
"iz4ig9cJMQ",
"esZXxp6VG5",
"d2vvBW4rRZ",
"cUGh3c2wFV",
"ZazS8kz1ON",
"XjKB2M1yCo",
"Wq3ZqK2IBo",
"VyVlAELjwu",
"Tgupi4VITG",
"SvpwYaIsvl",
"OrtLJQJYIh",
"NQITJL7OmZ",
"NPrYCUP9Uy",
"K4nD0msbUx",
"IA7flYCUNk",
"FK6n2Ox8T6",
"EvmDtFaEPJ",
"C4BU1M1CZ9",
"9B10a7O6Sp",
"6bYwx8LBy1",
"6W3IThA6Gh",
"5gY3bwvrnw",
"3qY85mblYP",
"3pRjoG1YWK",
"36CNvcozJo"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732555837324,
1732555826648,
1732555863849,
1732203791274,
1732205294670,
1732204318638,
1730846436000,
1732473810549,
1732204682866,
1732689559864,
1732203430021,
1733086080557,
1730465975185,
1732901105974,
1737524123610,
1732473106783,
1730324349284,
1732204041185,
1732555852009,
1732203912179,
1732282253224,
1732705952656,
1732217244405,
1730700229773,
1734780646018,
1732742176454,
1732204247470,
1733204035365,
1732662049138,
1733086019407
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_15Pv"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_YdC5"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_YdC5"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_i6Q2"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_YdC5"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_i6Q2"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_i6Q2"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_WMvw"
],
[
"ICLR.cc/2025/Conference/Submission11426/Area_Chair_kH1t"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11426/Reviewer_WMvw"
],
[
"ICLR.cc/2025/Conference/Submission11426/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for your valuable comments. With the ICLR rebuttal phase deadline approaching, we would greatly appreciate any additional feedback or concerns you may have.\"}",
"{\"comment\": \"Thank you for your valuable comments. With the ICLR rebuttal phase deadline approaching, we would greatly appreciate any additional feedback or concerns you may have.\"}",
"{\"comment\": \"Thank you for your valuable comments. With the ICLR rebuttal phase deadline approaching, we would greatly appreciate any additional feedback or concerns you may have.\"}",
"{\"title\": \"Thank you for the comments (Part I)\", \"comment\": \"We sincerely thank you for your valuable feedback and suggestions. Our responses are as follows, which we hope could well address your concerns.\\n\\n**Q1** The context information is included in the offline dataset. But when the agent is evaluated (run in the environment), is the context information available then?\\n\\n**A1** The context information is only required during training on the offline dataset and is not available during evaluation in the environment. During training, the context information is used solely to group trajectories into different categories or environments. It is neither used as input nor relied upon during inference time.\\n\\nImportantly, during training, we do not need the exact values of the context variables. Instead, labels indicating which trajectories belong to different environments ($1, 2, \\\\dots, n$) are sufficient. These labels allow us to differentiate between environments without requiring explicit context values. This approach aligns with our goal of achieving generalization without dependence on direct context information.\\n\\n**Q2** Since the MDP can be arbitrarily different between contexts, how would it be possible to generalize to other contexts without any underlying structure?\\n\\n**A2** The contexts in our setting are drawn from the same distribution \\n$C$ for both the offline training data and the testing evaluation. This shared distribution is essential for enabling generalization across contexts.\\n\\nOur result does not explicitly depend on the size of the context set. Instead, it hinges on the covering number of the function class, as reflected in the $I_1$ term in equation (1) (line 321). This approach aligns with standard supervised learning theory, where generalization bounds depend on the complexity of the hypothesis space (e.g., VC-dimension or covering number) rather than the cardinality of the input space. 
This ensures that our method generalizes effectively even in cases where the number of contexts is large or infinite. \\n\\n**Q3** If the context variable is included in the dataset and we are only interested in Markovian policies, how is this different from augmenting the state with the context variables and using a standard offline RL algorithm?\\n\\n**A3** The context variables included in the dataset serve primarily as indicators to differentiate which task or environment each trajectory belongs to, rather than containing any semantic information. In our setting, context information is only required during training on the offline dataset and is not available during evaluation. Moreover, the offline dataset does not include explicit context variables but instead provides labels that differentiate trajectories collected from distinct environments. Since these labels do not correspond to explicit context variables, it is not feasible to augment the state with context information and directly apply standard offline RL algorithms.\\n\\n**Q4** Why is $V_{i-1,1}^\\\\pi (x_1)$ not achievable? What does this mean?\\n\\n**A4** By stating that $V_{i-1,1}^\\\\pi (x_1)$\\n is not achievable, we mean that it cannot be directly computed because the ground-truth transition dynamics of the environment are unknown. As a result, we rely on approximations. Specifically, we use a linear approximation method to iteratively update the value estimation based on the previous iteration's results. This approach aligns with standard techniques used in algorithms like PPO, where similar approximations are employed to estimate value functions in the absence of exact dynamics.\"}",
"{\"title\": \"Thank you for the comments\", \"comment\": \"We sincerely thank you for your valuable feedback and suggestions. Our responses are as follows.\\n\\n**Q1** could you provide uncertainty estimates?\\n\\n**A1** Thank you for your insightful suggestion. We have included the uncertainty estimates of mean and median returns in Table 2 of our revised manuscript. We also report the Interquartile Means (IQM) with confidence intervals in the revised manuscript to better illustrate the significance of the observed improvements. \\n\\n\\n**Q2** generally, I am surprised that BC performs so well - not only on the expert datasets where it could be expected, but also on the mixed datasets. Do you have a theory about why that is the case? \\n\\n**A2** We would like to emphasize that BC can outperform offline RL methods in certain cases, as its performance is highly dependent on the quality of the behavior policy. Theoretically, as established in [1], which studies the comparison between BC and offline RL in single-task settings, when the behavior policy is optimal, BC can converge to the optimal policy at a rate of $1/N$, where $N$ is the number of trajectories in the offline dataset. This convergence rate is faster than the $1/\\\\sqrt{N}$ rate achieved by most offline RL methods, such as CQL. Consequently, when the offline dataset is finite (as in our case), BC can indeed outperform offline RL methods. However, when the behavior policy is suboptimal, the comparison between BC and offline RL becomes more nuanced and is influenced by factors such as dataset quality and the degree of suboptimality in the behavior policy. The theoretical results in [1] also extend to our methods, as RL with generalization encompasses the single-task RL setting.\\n\\nIt is important to note that our primary goal is not to assert that our methods will always outperform BC, particularly in finite-sample settings. 
Instead, our aim is to present a theoretically grounded framework that identifies the failure modes of existing offline RL methods in terms of generalization and demonstrates how our proposed approaches address these issues. While BC may outperform offline RL methods in specific scenarios, our contributions lie in providing a deeper understanding of the limitations of existing methods and proposing solutions that offer better generalization under broader conditions.\\n\\n\\n[1] \\\"When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?\\\"\\n\\n\\n**Q3** Figure 2 shows the differences in performance between the IQL baseline and the newly proposed approach. Please add that information in the caption.\\n\\n**A3** Thank you for highlighting this issue. We have clarified it in the revised version of the paper. Specifically, Figure 2 illustrates the performance differences between our proposed IQL-4V approach and the original IQL algorithm on a per-game basis, as measured by min-max normalized scores.\\n\\n**Q4** I believe you could show the merits of your method in a much more convincing way if you chose a baseline that would do this - e.g. use a meta-learning approach and don't allow it to fine-tune on the target task.\\n\\n**A4** Thank you for your insightful suggestion. We would like to highlight that we do not allow context information during the test phase; therefore, it is impossible to incorporate the context information to boost the performance of baseline algorithms such as vanilla IQL. Regarding the suggestion of using a meta-learning approach, we conducted an additional evaluation using the method proposed by Mitchell et al. (2021), without fine-tuning, to provide a direct comparison with our approach on the Miner and Maze Expert datasets. 
The results are summarized below:\\n\\n|Procgen game| IQL-4V | MACAW-4Tasks (w/o fine-tuning) |\\n|---------------|------------------|---------------------------|\\n|Miner| $6.36 \\\\pm 1.85$ | $4.0 \\\\pm 0.70$ |\\n|Maze| $5.0\\\\pm 1.26$| $2.4\\\\pm 1.50$|\\n\\nAs outlined in Algorithm 1 (MACAW Meta-Training) by Mitchell et al. (2021), the primary distinction between the training process of our IQL-4V method and the meta-training phase of the MACAW algorithm lies in how the critic gradient updates are handled. In MACAW's meta-training, the gradient updates from individual tasks at each training step are consolidated into a single critic network, guided by the \\\"test batch\\\" for each task. Notably, MACAW consistently maintains only one unified critic network throughout the meta-training process. While both our method and meta-learning approaches share the concept of leveraging multiple optimization objectives during training, our method demonstrates superiority. This advantage stems from our approach\\u2019s ability to maintain multiple value networks throughout training, which enhances the model\\u2019s capacity to effectively capture and utilize diverse contextual information. \\n\\nAgain, we sincerely thank you for your time and constructive feedback. We hope our responses address your concerns, and we welcome any further questions or suggestions.\"}",
"{\"title\": \"Thank you for the comments (Part II)\", \"comment\": \"**Q6** The selection of the baseline methods is not justified; why not compare the proposed methods with SOTA methods?\\n\\n**A6** We chose IQL since it is a well-known method with a Markovian policy. In Mediratta et al. (2023), the SOTA method for comparison is identified as BC, which we have indeed included in our experiments. Our results demonstrate that the variants of IQL proposed in our paper significantly enhance its performance, bringing it close to the level achieved by BC. The primary focus of our work is to explore the mechanisms for improving offline RL performance in a generalization setting. We chose IQL as a representative method because of its suitability for this investigation.\\n\\n\\nAgain, thank you very much for reviewing our work and your positive comments. We are happy to provide further clarifications if needed.\"}",
"{\"summary\": \"This paper focuses on generalization across contexts in offline RL. The authors prove that their proposed algorithms, pessimistic ERM and pessimistic PPO, can achieve good regret bounds in this setting. Inspired by the theory, a modified version of IQL is evaluated on offline ProcGen and showcases the benefits of the approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper tackles an important problem, generalization in RL with offline datasets, and takes a theoretical perspective. From a look, the derivations seem sound and, generally, enough detail is included to understand them.\\nAlso, although the paper has a theoretical focus, it is nice to see some of the principles tried in an empirical setting.\", \"weaknesses\": \"The problem setting requires some clarification. The general setting is contextual MDPs with offline datasets but some key details are unclear.\", \"for_example\": [\"The context information is included in the offline dataset. But when the agent is evaluated (run in the environment), is the context information available then?\", \"There does not seem to be any assumption on context information or any common structure relating the context to the MDP. So, since the MDP can be arbitrarily different between contexts, it's confusing that the bound given by equation (1) (line 335) does not contain a term related to the number of contexts or a related quantity. 
How would it be possible to generalize to other contexts without any underlying structure?\", \"If the context variable is included in the dataset and we are only interested in Markovian policies, how is this different from augmenting the state with the context variables and using a standard offline RL algorithm?\", \"There are a few concerns about the empirical results and I have some questions in the next section.\", \"I may be missing some important pieces of information concerning the above points and would be willing to revise my score based upon further clarification.\"], \"questions\": [\"Line 385: Why is $V^\\\\pi_{i-1,1}(x_1)$ not achievable? What does this mean?\", \"The objective chosen is the \\\"suboptimality gap compared to the best Markovian policy\\\" (line 164). The best Markovian policy (without context) can be bad on every environment. When using history-dependent policies, it can be possible to infer the context and do much better. Based on this, it's unclear how meaningful the proposed objective is. Perhaps a worst-case bound over contexts would be a more satisfying choice. \\\\\", \"Alternatively, a setting where the context variable is revealed in the offline dataset but hidden during evaluation would be interesting to consider. That is, we want to change the objective so that we compare to history-dependent policies. Some additional modifications may be needed so learning is feasible and is different from POMDPs.\", \"Line 257: The presence of an oracle for each individual dataset seems like a strong assumption. Could you elaborate on how one could implement the oracle and what kind of estimates would be feasible for $\\\\Gamma(s,a)$?\", \"For the counterexample CMDP in section 4 (line 235), it seems unreasonable to expect the agent to do well on actions that are never observed ($\\\\mu(a) = 0$). There would be no observed rewards for those actions so the agent has no information. 
Perhaps the counterexample should be adjusted to account for this and give some positive probability to all actions.\", \"Looking at the table 3, the stochastic policy variant is better but this is not the result reported in table 2 (in the row for Miner)\", \"In table 2, we see the deterministic policy result is reported which is markedly worse.\", \"It would be more fair to report the stochastic policy variant in table 2. Currently, it seems like it is the difference between stochastic and deterministic policies that makes the largest difference and not the additional value networks.\", \"Is the stochastic policy suggested in this work or is it a variant from previous works?\", \"When running IQL-nV, there are more parameters and more capacity in the network. It would be more fair to compare to the 1V version which gets additional parameters to roughly match the ones of nV, n>1.\", \"Methods making use of ensembles of value networks have indeed been used to combat value overestimation issues in various works e.g. [1] and [2]. Any thoughts on parallels between these lines of work and the current one?\", \"[1] \\\"Randomized Ensembled Double Q-Learning: Learning Fast Without a Model\\\" Chen et al. \\\\\", \"[2] \\\"Maxmin Q-learning: Controlling the Estimation Bias of Q-learning\\\" Lan et al.\"], \"typos\": [\"In the algorithm boxes, at the end, should it say \\\"Return\\\" instead of \\\"Ensure\\\"?\", \"line 122: \\\"successive feature\\\" -> \\\"successor feature\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your thoughtful follow-up question.\\n\\n**Q** I would expect your method to outperform BC on the Mixed dataset. In what empirical settings should we expect your method to outperform BC?\\n\\n**A**\\nThank you for your question. We argue that while it might seem intuitive to always prefer offline RL when data quality is suboptimal, the reality is more nuanced. Empirically, we claim that our algorithms are preferred when the data quality is **highly suboptimal**. To further support our argument, we conducted an additional evaluation on the full Procgen benchmark, consisting of 16 games. This evaluation utilized a *suboptimal dataset*, in which the data quality is even worse, instead of the *mixed expert-suboptimal dataset*. To construct the offline dataset, we extracted 1 million transitions from the *25M suboptimal dataset* provided by Mediratta et al. (2023) for each Procgen game, following the same steps as in Mediratta et al. (2023). For the evaluation, we compared the performance of the IQL-4V, IQL and BC algorithms, adhering to the same practices outlined in our original paper. 
The results of the evaluation are as follows:\\n\\n|Procgen game| IQL-4V | IQL|BC |\\n|---------------|------------------|------------------|---------------------------|\\n|Bigfish| $3.03\\\\pm 0.96$ |$1.77\\\\pm 0.06$| $1.73 \\\\pm 0.14$ |\\n|Bossfight| $1.04\\\\pm 0.20$|$0.91\\\\pm 0.12$| $1.06\\\\pm 0.13$|\\n|Caveflyer| $2.01\\\\pm 0.41$|$1.63\\\\pm 0.20$| $1.47\\\\pm 0.14$|\\n|Chaser| $0.42\\\\pm 0.05$|$0.48\\\\pm 0.01$| $0.46\\\\pm 0.01$|\\n|Climber| $1.07\\\\pm 0.06$| $1.02\\\\pm 0.01$| $1.07\\\\pm 0.16$|\\n|Coinrun| $2.10\\\\pm 1.00$| $2.80\\\\pm0.60$|$2.00\\\\pm 0.10$|\\n|Dodgeball| $0.72\\\\pm 0.20$|$0.72\\\\pm 0.16$| $0.61\\\\pm 0.09$|\\n|Fruitbot| $-1.06\\\\pm 0.07$|$-0.16\\\\pm 0.06$| $-2.53\\\\pm 0.02$|\\n|Heist| $0.80\\\\pm 0.01$|$0.65\\\\pm 0.15$| $0.30\\\\pm 0.01$|\\n|Jumper| $1.70\\\\pm 0.10$|$ 1.35\\\\pm 0.35$| $1.20\\\\pm 0.10$|\\n|Leaper| $4.10\\\\pm 0.10$|$3.75\\\\pm 0.05$| $3.40\\\\pm 0.30$|\\n|Maze| $1.25\\\\pm 0.45$|$1.25\\\\pm 0.25$|$1.30\\\\pm 0.40$|\\n|Miner| $0.14\\\\pm 0.01$|$0.12\\\\pm 0.02$| $0.15\\\\pm 0.03$|\\n|Ninja| $1.20\\\\pm 0.10$|$1.15\\\\pm 0.15$| $1.35\\\\pm 0.15$|\\n|Plunder| $2.95\\\\pm 0.80$|$2.26\\\\pm 0.30$| $2.63\\\\pm 0.04$|\\n|Starpilot| $4.20\\\\pm 0.12$|$3.89\\\\pm 0.32$| $4.55\\\\pm 0.28$|\\n|Mean|$-0.155\\\\pm 0.047$|$-0.163\\\\pm 0.040 $|$-0.179\\\\pm 0.042$|\\n|Median|$-0.088\\\\pm 0.074$|$-0.092\\\\pm 0.063$|$-0.074\\\\pm 0.066$|\\n|IQM|$-0.079\\\\pm 0.016$|$ -0.099\\\\pm 0.022$|$-0.108\\\\pm 0.022$|\\n\\n\\nThe results show that IQL-4V generally outperforms BC for **highly suboptimal data**. We hope this response addresses any concerns you may have. We kindly request that our work not be judged based on goals beyond its current scope\\u2014such as conducting a full comparison with BC or developing a new algorithm that consistently outperforms BC\\u2014or on misaligned expectations regarding the experimental outcomes. Please do not hesitate to reach out if you have any further questions or comments.\"}",
"{\"title\": \"Thank you for the comments\", \"comment\": \"We sincerely thank you for your valuable feedback and suggestions. Our responses are as follows, which we hope could well address your concerns.\\n\\n**Q1** The empirical findings in Table 2 and Figure 2 do not leave the reader confident that a solution has been found--BC's median performance is better than (or similar to) IQL-4V's on both the expert and mixed datasets. Presumably this can be explained with reference to the sub-optimality gap of PERM reported in Table 1, but the authors do not discuss this.\\n\\n**Q2** In practice, why does IQL-4V not outperform BC in Section 6? It would be helpful if you could discuss more thoroughly the limitations in translating the theoretical findings to practical application.\\n\\n**A1&A2** We would like to emphasize that BC can outperform offline RL methods in certain cases, as its performance is highly dependent on the quality of the behavior policy. Theoretically, as established in [1], which studies the comparison between BC and offline RL in single-task settings, when the behavior policy is optimal, BC can converge to the optimal policy at a rate of $1/N$, where $N$ is the number of trajectories in the offline dataset. This convergence rate is faster than the $1/\\\\sqrt{N}$ rate achieved by most offline RL methods, such as CQL. Consequently, when the offline dataset is finite (as in our case), BC can indeed outperform offline RL methods. However, when the behavior policy is suboptimal, the comparison between BC and offline RL becomes more nuanced and is influenced by factors such as dataset quality and the degree of suboptimality in the behavior policy. The theoretical results in [1] also extend to our methods, as RL with generalization encompasses the single-task RL setting.\\n\\nIt is important to note that our primary goal is not to assert that our methods will always outperform BC, particularly in finite-sample settings. 
Instead, our aim is to present a theoretically grounded framework that identifies the failure modes of existing offline RL methods in terms of generalization and demonstrates how our proposed approaches address these issues. While BC may outperform offline RL methods in specific scenarios, our contributions lie in providing a deeper understanding of the limitations of existing methods and proposing solutions that offer better generalization under broader conditions.\\n\\n\\n[1] \\\"When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?\\\"\\n\\n\\n\\n\\n\\n\\n**Q3** The notation is dense and overloaded, which made following the analysis difficult. \\n\\n**A3** We appreciate your feedback and understand that the dense notation and definitions may have made the paper challenging to follow. The definitions of the value function and Q-function in Lines 153-156 follow standard conventions in reinforcement learning studies. However, we acknowledge that presenting these concepts clearly and concisely is crucial to improving readability. Additionally, we have reviewed the language throughout the paper to improve clarity and fluency.\\n\\n\\n**Q4** I'm unsure what the key contribution is. If a practitioner wanted to build algorithms based on the theoretical results it is, in my opinion, not clear where they should begin.\\n\\n**A4** Our main contributions are summarized in the Introduction Section (Lines 52-80). In essence, we prove why previous offline RL algorithms fail to generalize in the zero-shot generalization (ZSG) setting, and we propose two frameworks, PERM and PPPO, that are shown to achieve ZSG both theoretically and empirically.\\n\\nTo provide practical insights, we have implemented our proposed methods and demonstrated their effectiveness in practice. The implementation is based on the key idea of leveraging multiple value networks to capture variations across environments. 
Detailed descriptions of our experimental setups and results can be found in Section 6, which we hope can guide practitioners in applying our methods.\\n\\n\\n\\n**Q5** Empirically, our frameworks do not find the near-optimal policy with ZSG.\\n\\n**A5** We have revised the statement in the abstract to clarify and ensure precision. \\n\\n\\n**Q6** At test-time, how do you select which of the policies to use for an arbitrary unseen context?\\n\\n**A6** We respectfully think there may be some misunderstandings. First, in our IQL-nV implementation, we only have 1 policy. Therefore, we do not need to \\\"choose\\\" a policy. Second, we keep $n$ value functions, which are only used for training, not for testing. During testing, we only evaluate our policy, not the value functions.\\n\\n**Q7** Minor feedback and typos.\\n\\n**A7** Thank you for your valuable suggestions. We have revised our paper accordingly.\\n\\nMoreover, we have included the link to our code to reproduce the experiments in the revised abstract.\\n\\nAgain, we sincerely thank you for your time and constructive feedback. We hope our responses address your concerns, and we welcome any further questions or suggestions.\"}",
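For readers trying to picture the "one policy, $n$ value functions" setup described in A6 above, here is a minimal, dependency-free sketch. This is an illustrative reading of the idea, not the authors' implementation: the function names, the tabular TD(0) update, and the averaged-advantage policy extraction are all assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of the "n value functions, one
# policy" idea: each environment i gets its own tabular value estimate V_i,
# fitted only on transitions from dataset i; a single policy is then greedy
# w.r.t. the advantage averaged over the n value estimates.
from collections import defaultdict

def fit_values(datasets, gamma=0.99, sweeps=200, lr=0.5):
    """One value table per environment, trained by TD(0) on its own dataset."""
    value_tables = []
    for data in datasets:  # data: list of (s, a, r, s_next) tuples
        V = defaultdict(float)
        for _ in range(sweeps):
            for s, a, r, s_next in data:
                target = r + gamma * V[s_next]
                V[s] += lr * (target - V[s])
        value_tables.append(V)
    return value_tables

def extract_policy(datasets, value_tables, gamma=0.99):
    """Single policy: per state, pick the action whose TD advantage,
    averaged over the n value estimates, is highest."""
    n = len(value_tables)
    scores = defaultdict(lambda: defaultdict(float))
    for data in datasets:
        for s, a, r, s_next in data:
            adv = sum(r + gamma * V[s_next] - V[s] for V in value_tables) / n
            scores[s][a] += adv
    return {s: max(acts, key=acts.get) for s, acts in scores.items()}
```

In this toy setup the $n$ value tables are fitted independently, one per environment's dataset, and are discarded once the single policy has been extracted, matching the statement that the value functions are used only during training, never at test time.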
"{\"comment\": \"Thank you for your positive feedback. Again, we sincerely appreciate your support, thoughtful review, and constructive suggestions.\\n\\nBest regards,\\n\\nThe Authors of Submission 11426\"}",
"{\"title\": \"To All Reviewers\", \"comment\": \"Thank you for your comments. We have revised our draft according to your suggestions. We list the main changes here for your reference.\\n\\nFirst, we conducted a new experiment to address the reviewers' concerns, specifically an ablation study that compares IQL-4V with an enhanced version of IQL-1V. In this comparison, the network parameters of IQL-1V were scaled to match the model capacity of IQL-4V (Lines 1445\\u20131457).\\n\\nSecond, we have reported Interquartile Means (IQM) with confidence intervals in addition to mean and median performance metrics to provide a clearer understanding of the significance of the observed improvements (Line 453).\\n\\n\\nThird, we improved the clarity of the manuscript by enhancing figures and captions (notably, lines 455-470). \\n\\nWe hope these revisions address your concerns and enhance the overall quality and clarity of the paper. Thank you again for your valuable feedback.\"}",
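Since the revision now reports Interquartile Means (IQM) with confidence intervals, here is a minimal sketch of how such numbers can be computed, in the spirit of Agarwal et al. (2021). This is an illustrative re-implementation with a plain percentile bootstrap, not the authors' evaluation code (which may use stratified resampling over games and runs).

```python
# Illustrative IQM + percentile-bootstrap CI; not the rliable library itself.
import random

def iqm(scores):
    """Mean of the middle 50% of scores (drop bottom and top quartiles)."""
    s = sorted(scores)
    k = len(s) // 4
    middle = s[k: len(s) - k]
    return sum(middle) / len(middle)

def iqm_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for the IQM."""
    rng = random.Random(seed)
    stats = sorted(
        iqm([rng.choice(scores) for _ in scores]) for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Compared to the mean, the IQM discards the top and bottom quartiles, so a single outlier game cannot dominate the aggregate; compared to the median, it still uses half of the scores, which keeps the bootstrap intervals tighter.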
"{\"comment\": \"Thank you for your valuable feedback! We greatly appreciate your suggestions and will incorporate them. Below, we address the concern you have raised.\\n\\n**Q**: How about more value networks? Should we expect a better performance? \\n\\n**A**: We would like to address a potential misunderstanding: it is not necessarily true that increasing the number of value networks always leads to better performance. While having more value networks can increase their independence from one another, it also reduces the number of trajectories available to train each network. This observation is further supported by our **Theorem 22**, which provides a real-world suboptimality gap for PERM. For clarity, we restate **Theorem 22** here for reference. For any policy $\\\\pi'$ which is the output of PERM with $m$ number of value networks, we have its suboptimality gap as\\n$$\\\\text{SubOpt}(\\\\pi')\\\\leq \\\\underbrace{2\\\\sqrt{\\\\frac{2\\\\log(6\\\\mathcal{N}\\\\_{(Hm)^{-1}}^\\\\Pi/\\\\delta)}{n}}}\\\\_{I_1: \\\\text{Supervised learning (SL) error}}+\\\\underbrace{\\\\frac{2}{m}\\\\sum\\\\_{j=1}^m\\\\sum\\\\_{h=1}^H\\\\mathbb{E}\\\\_{\\\\pi^*,M_j}{[{\\\\Gamma'}\\\\_{j,h}(s_h,a_h)|s_1=x_1}]}\\\\_{I_2: \\\\text{Reinforcement learning (RL) error}}+ \\\\underbrace{\\\\frac{5}{m}+2 \\\\sup\\\\_\\\\pi | \\\\frac{1}{n}\\\\sum_{i=1}^n V^{\\\\pi}\\\\_{i,1}(x_1)-\\\\frac{1}{m}\\\\sum_{j=1}^m {V'}^{\\\\pi}\\\\_{j,1}(x_1)|}\\\\_{\\\\text{Additional approximation error}}$$\\nOur bound demonstrates that as $m$, the number of value networks, increases, the \\\"additional approximation error\\\" term decreases. However, the \\\"RL error\\\" term may increase because the uncertainty $\\\\Gamma'_{j,h}$ might grow to account for a more diverse average MDP $M_j$. As a result, we cannot claim that increasing the number of value networks will universally improve overall performance.\\n\\nAdditionally, we conducted a new experiment in our ablation study to examine this phenomenon. 
Specifically, we scaled the number of value functions in IQL-16V to twice that of our earlier IQL-8V ablation study. Below, we present the updated results, which build on Table 3 from our original submission.\\n\\n| Procgen Game | 16V-SP (Expert) | 8V-SP (Expert) | 4V-SP (Expert) |\\n|--------------|-----------------|----------------|----------------|\\n| Miner | $7.04 \\\\pm 0.27$ | $7.88 \\\\pm 0.71$ | $6.36 \\\\pm 1.85$ |\\n\\nFrom the new results, we observe that IQL-16V actually performs worse than IQL-8V. This finding further suggests that selecting a larger number of value networks does not always result in better performance.\"}",
"{\"summary\": \"This paper studies zero-shot generalisation (ZSG) to unseen environment contexts in offline RL. The authors perform extensive theoretical analyses to establish why standard offline RL techniques perform poorly in this setting. They propose two idealised algorithms (PPPO and PERM) and provide bounds on the sub-optimality of these methods w.r.t. an optimal policy for ZSG. They conclude by providing a practical instantiation of their idealised algorithms based on IQL and evaluate its ZSG performance on procgen.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem setting is an important, unexplored area. To the best of my knowledge, this is the first work that provides a theoretical analysis of zero-shot generalisation to unseen environments in offline RL.\", \"The authors' theoretical analysis is meticulous and extensive.\", \"The authors supplement their theoretical analysis with empirical results on the relevant benchmark provided by Mediratta et al. (2023)\"], \"weaknesses\": [\"Mediratta et al. (2023)'s work showed that behaviour cloning (BC) zero-shot generalised to unseen environments better than standard offline RL methods. The goal of this paper, in effect, is to establish the failure mode of offline RL methods in this setting and propose new methods that remedy it. However, the empirical findings in Table 2 and Figure 2 do not leave the reader confident that a solution has been found--BC's median performance is better than (or similar to) IQL-4V's on both the expert and mixed datasets. Presumably this can be explained with reference to the sub-optimality gap of PERM reported in Table 1, but the authors do not discuss this.\", \"I found the paper difficult to follow. The notation is dense, and in my opinion, overloaded (e.g. the definitions of state and state-action value functions in Lines 150-153), which made following the analysis difficult. 
At times the English is poor, but I'm sensitive to the fact that it may not be the authors' first language. After several read-throughs, I'm left unsure what the key contribution is (there are many disparate contributions). If a practitioner wanted to build algorithms based on the theoretical results it is, in my opinion, not clear where they should begin. NB: this is not a value-judgement on the quality of the theoretical analysis, but it does affect how easily others can build upon the authors' findings.\", \"**Minor feedback**\", \"In Line 19 you say that \\\"our frameworks find the near-optimal policy with ZSG both theoretically and empirically\\\". The findings in Section 6 suggest that you do not find the near-optimal policy in your empirical setting.\", \"Figure 2 is tricky to read, and lacks a y-axis label.\", \"Line 122: \\\"successive feature\\\" -> \\\"successor features\\\", and Touati et al. (2023) do study zero-shot generalisation in offline RL, but to unseen reward functions rather than unseen environments. Similar works that are not cited include [1,2]\", \"Section 3 should be titled \\\"Preliminaries\\\".\", \"In Line 91 you miss Section 2 from your discussion of the rest of the paper.\", \"When reporting empirical results I would recommend following the guidance of [3] and use IQMs, confidence intervals obtained via bootstrapping etc.\", \"Line 355: \\\"becomes\\\" -> \\\"become\\\".\", \"Line 360: remove \\\"it\\\".\", \"It would be helpful if the code to reproduce the experiments was linked/provided.\", \"**References**\", \"[1] Park, S., Kreiman, T., and Levine, S. (2024). Foundation policies with hilbert representations. International Conference on Machine Learning.\", \"[2] Jeen, S., Bewley, T., and Cullen, J. M. (2024). Zero-shot reinforcement learning from low quality data. Advances in Neural Information Processing Systems 38.\", \"[3] Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A. C., and Bellemare, M. (2021). 
Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems, 34:29304\\u201329320\"], \"questions\": [\"In practice, why does IQL-4V not outperform BC in Section 6? It would be helpful if you could discuss more thoroughly the limitations in translating the theoretical findings to practical application.\", \"As far as I understand, IQL-nV trains $n$ value functions, and $n$ policies (one w.r.t. each value function), but in practice $n$ is less than the number of test contexts. At test-time, how do you select which of the $n$ policies to use for an arbitrary unseen context?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for your response\", \"comment\": \"Hi authors, thanks for your response and for the new results. I've re-read the paper with a fresh set of eyes, and have considered your responses to me and other reviewers.\\n\\nI'm now more convinced that, for the purposes of this paper, your algorithms do not need to broadly outperform BC, and the improvement over vanilla IQL that you show justifies your theoretical contributions earlier in the paper. \\n\\nI would encourage you to include these new results on the highly suboptimal datasets in an updated version of Table 2, and tone down language about \\\"generally outperforming BC\\\", and instead focus more on direct comparisons with vanilla IQL.\", \"a_new_thought_that_entered_my_mind_on_the_re_read\": \"what happens as you push $n$ beyond 8 in Table 3? Should we expect performance to improve further? I appreciate it is not computationally efficient, but presumably you could show better empirical performance by aggregating your value functions over fewer environments (i.e. get closer to a point where you maintain independent value functions for each environment)? I'd be keen to hear your response, but adding this study to the paper is not vital.\\n\\nGiven my reassessment, I'm happy to update my score from 5 to 6.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thank you for your thoughtful follow-up question.\\n\\n**Q** Offline RL should be more competitive to BC for the mixed data setting. Why is it not better?\\n\\n**A**\\nWe appreciate your interest in understanding the nuanced comparison between our method and BC in the mixed setting. However, we would like to highlight that the assertion \\\"offline RL will become more competitive with mixed-quality data\\\" **is not always true**. Below, we address your concerns from both theoretical and empirical perspectives:\\n\\n\\n**Theoretical Perspective:**\\n\\n\\n\\nWe first provide a theoretical illustration of how the performance gap between offline RL and BC evolves with respect to the **data coverage number** (reflecting the quality of the behavior policy) in the single-task setting. Since the multi-task setting generalizes the single-task setting, our observation can also be extended to the multi-task ZSG setting. \\n\\nWe follow the setup in [1]. For the single-environment setting, let $C^* = \\\\max_{s, a, h} \\\\frac{d_h^{\\\\pi^*}(s, a)}{d_h^{\\\\pi^b}(s, a)}$\\nquantify the data coverage of the behavior policy relative to the optimal policy which reflects the data quality. The higher $C^*$, the worse the offline data quality.\\n\\nThe suboptimality gap of BC is (ignoring logarithmic factors) $\\\\frac{H(C^* - 1)}{2} + \\\\frac{SH}{K}$,\\nwhere $S$ is the number of states, $H$ is the planning horizon, and $K$ is the number of trajectories. For offline RL methods, the suboptimality gap is $\\\\sqrt{\\\\frac{C^* S H}{K}} + \\\\frac{C^* S H}{K}$. Thus, the performance difference between offline RL and BC is: \\n$$\\nO(C^* H (1 - \\\\frac{S}{K}) - \\\\sqrt{\\\\frac{C^* S H}{K}} - \\\\frac{H}{2} + \\\\frac{SH}{K}). \\n$$\\n\\nThe key insight is that the above gap is **not always an increasing function** w.r.t. $C^*$. For example, when $1-\\\\frac{S}{K} = \\\\frac{1}{H^2}$, the above gap becomes\\n\\n$$\\nO(C^*/H- \\\\sqrt{C^*H(1-1/H^2)}+H-1/H). 
\\n$$\\n\\nWe set $C^* = H$ and $C^* = H^2$ to represent offline data with near-expert quality and mixed quality, respectively. Then the above gap becomes\\n$$\\n\\\\text{expert: }O(1)>0, \\\\text{mixed: }O(H-H^{1.5})<0.\\n$$\\n\\nThis example implies that with a mixed-quality dataset, it is possible for the performance gap between BC and offline RL to increase as $C^*$ grows beyond the expert-quality regime. \\n\\n[1] When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning? ICLR 2022.\\n\\n**Empirical Perspective:**\\n\\nEmpirically, to further support our argument, we conducted an additional evaluation on the full Procgen benchmark, consisting of 16 games. This evaluation utilized a *suboptimal dataset*, for which $C^*$ becomes even larger, instead of the *mixed expert-suboptimal dataset*. To construct the offline dataset, we extracted 1 million transitions from the *25M suboptimal dataset* provided by Mediratta et al. (2023) for each Procgen game, following the same steps as in Mediratta et al. (2023). For the evaluation, we compared the performance of the IQL-4V, IQL and BC algorithms, adhering to the same practices outlined in our original paper. 
The results of the evaluation are as follows:\\n\\n|Procgen game| IQL-4V | IQL|BC |\\n|---------------|------------------|------------------|---------------------------|\\n|Bigfish| $3.03\\\\pm 0.96$ |$1.77\\\\pm 0.06$| $1.73 \\\\pm 0.14$ |\\n|Bossfight| $1.04\\\\pm 0.20$|$0.91\\\\pm 0.12$| $1.06\\\\pm 0.13$|\\n|Caveflyer| $2.01\\\\pm 0.41$|$1.63\\\\pm 0.20$| $1.47\\\\pm 0.14$|\\n|Chaser| $0.42\\\\pm 0.05$|$0.48\\\\pm 0.01$| $0.46\\\\pm 0.01$|\\n|Climber| $1.07\\\\pm 0.06$| $1.02\\\\pm 0.01$| $1.07\\\\pm 0.16$|\\n|Coinrun| $2.10\\\\pm 1.00$| $2.80\\\\pm0.60$|$2.00\\\\pm 0.10$|\\n|Dodgeball| $0.72\\\\pm 0.20$|$0.72\\\\pm 0.16$| $0.61\\\\pm 0.09$|\\n|Fruitbot| $-1.06\\\\pm 0.07$|$-0.16\\\\pm 0.06$| $-2.53\\\\pm 0.02$|\\n|Heist| $0.80\\\\pm 0.01$|$0.65\\\\pm 0.15$| $0.30\\\\pm 0.01$|\\n|Jumper| $1.70\\\\pm 0.10$|$ 1.35\\\\pm 0.35$| $1.20\\\\pm 0.10$|\\n|Leaper| $4.10\\\\pm 0.10$|$3.75\\\\pm 0.05$| $3.40\\\\pm 0.30$|\\n|Maze| $1.25\\\\pm 0.45$|$1.25\\\\pm 0.25$|$1.30\\\\pm 0.40$|\\n|Miner| $0.14\\\\pm 0.01$|$0.12\\\\pm 0.02$| $0.15\\\\pm 0.03$|\\n|Ninja| $1.20\\\\pm 0.10$|$1.15\\\\pm 0.15$| $1.35\\\\pm 0.15$|\\n|Plunder| $2.95\\\\pm 0.80$|$2.26\\\\pm 0.30$| $2.63\\\\pm 0.04$|\\n|Starpilot| $4.20\\\\pm 0.12$|$3.89\\\\pm 0.32$| $4.55\\\\pm 0.28$|\\n|Mean|$-0.155\\\\pm 0.047$|$-0.163\\\\pm 0.040 $|$-0.179\\\\pm 0.042$|\\n|Median|$-0.088\\\\pm 0.074$|$-0.092\\\\pm 0.063$|$-0.074\\\\pm 0.066$|\\n|IQM|$-0.079\\\\pm 0.016$|$ -0.099\\\\pm 0.022$|$-0.108\\\\pm 0.022$|\\n\\nThe results demonstrate that IQL-4V generally outperforms BC for **highly suboptimal data**. Combined with our experimental findings, this supports our theoretical claim that the notion \\\"offline RL will always become more competitive compared to BC as data quality decreases\\\" is not universally true. \\n\\nWe hope this response addresses any concerns the reviewers may have. Please do not hesitate to reach out with any further questions or comments.\"}",
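The scaling argument in the theoretical perspective of the preceding comment can be sanity-checked numerically. The sketch below evaluates the order-level gap expression $g(C^*, H) = C^*/H - \sqrt{C^*H(1 - 1/H^2)} + H - 1/H$ at the two regimes used in that argument ($C^* = H$ near-expert, $C^* = H^2$ mixed); since constants are dropped inside the $O(\cdot)$, only the signs and growth rates are meaningful, not the magnitudes.

```python
import math

# Order-level gap between offline RL and BC suboptimality, specialized to the
# case 1 - S/K = 1/H^2 discussed in the text. Constants inside O(.) are
# dropped, so only signs and growth rates carry information.
def gap(c_star, horizon):
    h = horizon
    return c_star / h - math.sqrt(c_star * h * (1 - 1 / h**2)) + h - 1 / h

H = 100
expert_gap = gap(H, H)      # C* = H   (near-expert data): small positive, O(1)
mixed_gap = gap(H**2, H)    # C* = H^2 (mixed data): large negative, O(H - H**1.5)
```

With $H = 100$, the expert-regime value is a small positive number while the mixed-regime value is large and negative, matching the claimed sign pattern $O(1) > 0$ versus $O(H - H^{1.5}) < 0$.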
"{\"summary\": \"The authors address the problem of zero-shot offline generalisation, where an algorithm is provided with a number of datasets from some source environments and aims to train a policy that performs as well as possible in a set of target environments from which it has never seen any data.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors propose a model-based (PERM) as well as a model-free algorithm (PPPO), which to the best of my knowledge are the first approaches to exhibit zero-shot generalisation capabilities with a proved bound on their suboptimality. They also provide empirical results on a number of environments & analyse why previous offline algorithms fail in the zero-shot generalisation setting without context. Personally, I believe this area to be crucially important for practical applications and believe it to deserve further investigation as it is so far underexplored, and I thus very much welcome the authors' work.\", \"weaknesses\": [\"I have some questions about the empirical evaluation:\", \"could you provide uncertainty estimates also for mean & median performance? Currently it is hard to judge whether the mean improvement from e.g. BC to IQL-4V on expert is significant or not.\", \"generally, I am surprised that BC performs so well - not only on the expert datasets where it could be expected, but also on the mixed datasets. Do you have a theory about why that is the case? Overall BC seems to outperform the proposed method in terms of median & mean performance (except for mean on expert, but it is unclear whether that is significant). With that in mind, it is questionable whether in practice one would not rather use the much simpler, easier to interpret & implement BC method instead of the proposed solution. 
Generally, the authors don't comment much on BC in the corresponding section and I think this point needs some further discussion.\", \"if I am not mistaken, figure 2 shows the differences in performance between IQL baseline and the newly proposed approach (?). Please add that information in the caption, it currently only says difference, but not between what.\", \"furthermore, I understand that not much prior work in the offline zero-shot area exists, however as far as I understand (please correct if I am mistaken) your baseline IQL is overly simple, i.e. it treats all stages as belonging to the same MDP. An improvement over this behavior should be fairly simple when taking into account context information. I believe you could show the merits of your method in a much more convincing way if you chose a baseline that would do this - e.g. use a meta learning approach and don't allow it to fine-tune on the target task (meta learning does take into account context information but I would still expect your proposed method to outperform it)\"], \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you for the comments (Part III)\", \"comment\": \"**Q9** Is the stochastic policy suggested in this work or is it a variant from previous works?\\n\\n**A9** The use of a stochastic policy has indeed been considered by Mediratta et al. (2023) as an important hyperparameter configuration. However, their application of the stochastic policy was limited to BC evaluation and was not utilized for IQL evaluation. In contrast, our work specifically suggests employing a stochastic policy for IQL with multiple value networks. This recommendation is based on our observation that it significantly enhances overall performance on expert datasets.\\n\\n\\n**Q10** It would be more fair to compare to the 1V version which gets additional parameters to roughly match the ones of nV, n>1.\\n\\n**A10** Thank you for your insightful suggestion. Based on your feedback, we conducted an additional ablation study to explore your proposed scenario, where we implemented IQL-1V with a value network scaled to four times its original size. Namely, we increase the hidden dimension of the fully-connected layers of the value network from 256 to 1024 in the IQL-1V setting, ensuring that the total number of critic parameters matches that of IQL-4V. Below are the results for the Miner Expert dataset:\\n\\n| Configuration | 4V-SP (256 hidden dim) | 1V-SP (256 hidden dim) | 1V-SP (1024 hidden dim) |\\n|--------------------------|---------------------------|---------------------------|---------------------------|\\n| Performance | $6.36 \\\\pm 1.85$ | $5.6 \\\\pm 1.89$ | $2.18 \\\\pm 1.05$ |\\n\\nInterestingly, the results indicate that increasing the model size for the single value network (1V-SP with 4M parameters) actually degrades performance compared to the original IQL-1V setup. We hypothesize that this decline is likely due to overfitting caused by scaling the network without additional structural changes. 
This observation highlights the practical advantages of our multi-value network approach, which achieves better performance while mitigating overfitting risks. \\n\\n**Q11** Methods making use of ensembles of value networks have indeed been used to combat value overestimation issues in various works, any thoughts on parallels between these lines of work and the current one?\\n\\n\\n**A11** Thank you for bringing up these works. It is worth noting that overestimation bias, as addressed in the cited studies, is a significant challenge in single-task reinforcement learning. While these methods focus on mitigating estimation bias, our use of an ensemble of value networks serves a complementary purpose. Specifically, we leverage the ensemble approach to enhance generalization across tasks, ensuring robust performance in diverse and challenging settings.\\n\\n**Q12** Typos\\n\\n**A12** We have fixed them in revision. \\n\\nAgain, we sincerely thank you for your time and constructive feedback. We hope our responses address your concerns, and we welcome any further questions or suggestions.\"}",
"{\"comment\": \"Thank you for your valuable comments. With the ICLR rebuttal phase deadline approaching, we would greatly appreciate any additional feedback or concerns you may have.\"}",
"{\"title\": \"Thank you for the comments (Part II)\", \"comment\": \"**Q5** How meaningful is the proposed objective of comparing to the best Markovian policy, and could a worst-case bound over contexts or an alternative setting involving history-dependent policies provide a more appropriate benchmark?\\n\\n**A5** First, we address this concern in Remark 1 (Lines 168\\u2013175), where we compare our setting with POMDPs and history-dependent policies. Our choice to focus on Markovian policies is motivated by their simplicity and practicality. The goal of this work is to provide a generalizable framework that extends existing offline RL methods, which primarily focus on Markovian policies (e.g., IQL), to address generalization across contexts. While history-dependent policies are an interesting and potentially powerful direction, they fall outside the scope of this work.\\n\\nSecond, regarding worst-case bounds, we clarify in Remark 2 (Lines 176\\u2013183) that such bounds are not achievable in the zero-shot generalization (ZSG) setting. This assertion is supported by prior work (Ye et al., 2023), which establishes lower bounds for the online RL setting. Since the offline RL setting is inherently more challenging, these lower bounds hold for our scenario as well. Consequently, our approach focuses on achievable objectives within the ZSG framework.\\n\\nFinally, we are indeed studying the setting suggested in your comment, where context information is not accessible during evaluation and may not be explicitly included in the offline data. The only assumption we make is that the context distribution remains the same between training and evaluation. History-dependent policy could be more powerful, but it is beyond the scope of our current work and could be an exciting direction for future research.\\n\\n**Q6** The presence of an oracle for each individual dataset seems like a strong assumption. 
Could you elaborate on how one could implement the oracle and what kind of estimates would be feasible for $\\\\Gamma(s,a)$\\n\\n**A6** First, the assumption of an oracle for each individual dataset aligns with prior theoretical work in offline RL, such as Jin et al. (2021).\\n\\n\\nSecond, for linear MDPs, we provide an explicit formula for instantiating the oracle, as detailed in Eq. (29) in Appendix D. This demonstrates how the oracle can be practically implemented in specific cases.\\n\\nFinally, for general non-linear MDPs, we discuss feasible implementations in Remark 7 (Lines 281-284). A practical approach is to use bootstrapping techniques to estimate uncertainty, as in the Bootstrapped DQN method (Osband et al., 2016). We note that when the bootstrapping method is straightforward to implement, the assumption of having access to an uncertainty quantifier is reasonable.\\n\\n\\n\\n\\n**Q7** For the counterexample CMDP in section 4, it seems unreasonable to expect the agent to do well on actions that are never observed. There would be no observed rewards for those actions so the agent has no information.\\n\\n**A7** We apologize for the confusion. First, we would like to clarify that the two MDP graphs in Figure 1 indeed share the same state-action space (i.e., they are components ${u,v}$ of the same cMDP). As a result, the data distribution $\\\\mu$ effectively covers all the actions in the underlying MDP. Second, we emphasize that the data distributions $\\\\mu_u$ and $\\\\mu_v$ are derived from the near-optimal policy within each context.
These distributions are almost entirely skewed toward the actions with maximum rewards in their respective contexts, which satisfies our assumptions and provides the agent with sufficient information about the maximum reward policies.\\n\\n\\n\\n**Q8** Why does Table 2 report the deterministic policy result for Miner instead of the stochastic policy variant, and would this not more fairly reflect the impact of stochastic versus deterministic policies relative to the additional value networks?\\n\\n\\n**A8** We did not specifically optimize hyperparameters, including whether to use a stochastic or deterministic policy, for each individual game in Procgen due to the computational cost. Instead, we used a consistent set of hyperparameters across all games. While a stochastic policy performs better for the Miner dataset, it yields suboptimal performance when considering the entire collection of games. This is why, for the expert dataset, we adopt the stochastic policy to maximize overall performance, as detailed in Table 4, which outlines our hyperparameter selection process.\\n\\nRegarding the difference between stochastic and deterministic policies, we emphasize that the choice is case-dependent. For instance, we use a deterministic policy for the mixed dataset, as shown in Table 4. Additionally, with respect to the number of value networks, Table 2 demonstrates that IQL-4V outperforms IQL with a deterministic policy for the mixed dataset, highlighting the utility of increasing the number of value networks.\"}",
"{\"title\": \"Thanks for your detailed response and updates\", \"comment\": \"Thanks for your detailed response and for the effort that has gone into changing the manuscript. I'll respond directly to A1&A2 which address my core concerns.\\n\\nI appreciate that we should expect BC to be a strong baseline when trained on expert trajectories, and I do not necessarily expect that your proposals should outperform it in that setting. But I would expect your method to outperform BC on the Mixed dataset, and it doesn't in aggregate, as I wrote in my original review. My question is: in what empirical settings should we expect your method to outperform BC? It seems you have done the required theoretical analysis to establish this, but you haven't been able to show this empirically. I appreciate this is largely a theory paper, but I feel this is an important empirical demonstration to include, and the paper currently lacks it.\"}",
"{\"comment\": \"Thank you for the detailed response.\\n\\nI have to say I am unsure what to make out of it. I would agree that offline RL doesn't always have to be better than BC in the mixed data setting. If it were only about a few datasets one could easily argue that this is not an issue. However, we are talking about 3 times (!) better median score in the mixed setting & still better median score on the \\\"highly suboptimal data\\\", i.e. putting it in more drastic terms, even if the behavior policy is garbage, simply cloning it gives better (or equal, due to the uncertainty) median performance than the considered offline RL methods (I am thus also confused by the statement \\\"IQL-4V generally outperforms BC for highly suboptimal data\\\"). To me these results are highly surprising and at least to the best of my knowledge, this is not a universally expected situation in prior offline RL literature either - of course situations exist where BC makes more sense, but those are usually more on the better data quality side. Since you are tackling the offline setting, where one cannot test which method performs better, and a practitioner thus has to choose one method from the start, it seems your data is implying that BC should be always favoured over offline RL no matter what data at hand. In the current manuscript, I think this whole topic is not mentioned at all, which I believe to be problematic due to the surprising & unintuitive results.\"}",
"{\"title\": \"Thank you for the detailed response\", \"comment\": \"I thank the authors for their detailed response to my questions and changes made to the manuscript.\\n\\nEven though I understand that your primary goal is not to show that the proposed method always outperforms BC, I would still have a follow-up question regarding that baseline:\\n\\nYou say that when the behavior policy is suboptimal, the comparison between BC and offline RL becomes more nuanced. I would absolutely agree, however wouldn't you then expect a much more competitive performance of your proposed method in the Mixed setting? It seems to only achieve 1/3 - 1/2 of the performance, depending on what metric I look at. When would you propose someone should use the method if BC can be reasonably expected to yield much better results?\\n\\nGenerally I see a value in algorithms that are well grounded in theory. It usually means one has a better understanding of what is going on. Do you maybe have an explanation for what we are seeing - why would you say is it not better?\"}",
"{\"summary\": \"This paper proposes Pessimistic Empirical Risk Minimization (PERM) and Pessimistic Proximal Policy Optimization (PPPO), both of which leverage a pessimistic policy evaluation component, aiming to address ZSG by minimizing a \\\"suboptimality gap\\\" that combines supervised learning error (related to policy generalization) and reinforcement learning error (related to dataset coverage).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper provides theoretical bounds on the suboptimality of both PERM and PPPO. This theoretical rigor is a positive contribution to ZSG.\", \"The proposed algorithms consider multiple environments separately, potentially improving ZSG by better capturing variations across contexts.\"], \"weaknesses\": [\"Applying a pessimistic bias to counteract distributional shifts can lead to over-conservative policies that might underperform, particularly in environments where high-risk, high-reward actions are necessary.\", \"Pessimism may sometimes hinder the exploration of higher-reward policies due to its inherent cautious nature. This paper\\u2019s methods also assume that each environment's context information and datasets are either directly available or accurately inferable, which limits the use cases from randomly sampled or mixed-quality datasets.\", \"The suboptimality bounds in the paper rely on having a large number of sufficiently varied environments. These bounds may be loose in situations with fewer or more similar environments, and the policy may perform poorly in unseen contexts.\", \"Both PERM and PPPO involve parameters that adjust the degree of pessimism applied, whose sensitivity is not fully discussed.\", \"PERM requires maintaining separate models or critic functions for each environment in the training set, which can quickly become computationally expensive as the number of environments grows.
PPPO, while model-free, still requires multiple policies for different environments, which limits scalability when working with diverse and high-dimensional data.\", \"The selection of the baseline methods is not justified; why not compare the proposed methods with SOTA methods?\"], \"questions\": \"please see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper studies zero-shot generalization in offline RL, proposing pessimistic algorithms and providing theoretical analysis along with empirical validation on the Procgen benchmark. While the theoretical framework contributes to understanding generalization in offline RL, the core technical approach of using pessimism for uncertainty quantification is fairly standard, and empirical results do not demonstrate clear advantages over simpler baselines like behavioral cloning.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion, reviewers raised several key concerns: (1) lack of empirical evidence that the proposed methods outperform behavioral cloning (BC), even on mixed and suboptimal datasets where offline RL should theoretically have an advantage, (2) overly complex analysis and notation that makes the practical implications unclear, and (3) reliance on pessimistic value estimation as the core technical approach, which is a well-established technique in offline RL. These concerns are not well addressed. While the authors emphasized that \\\"the primary goal of this paper is not to propose a state-of-the-art algorithm that always outperforms existing methods like BC across all settings\\\", I feel that the reviewers' concern is well-grounded. Moreover, it seems that the algorithms implemented by the authors are not those studied in theory. Overall, these concerns lead to the decision.\"}",
"{\"comment\": \"Thanks for your response. We address your concerns as follows.\\n\\n**Q**: I am thus also confused by the statement \\\"IQL-4V generally outperforms BC for highly suboptimal data\\u201d\\n\\n**A**: To clarify, our statement \\u201cIQL-4V generally outperforms BC for highly suboptimal data\\u201d is based on the following observations: in experiments on the suboptimal dataset, IQL-4V outperforms BC in 9 out of 16 games, while BC outperforms IQL-4V in only 6 out of 16 games, with one tie. We emphasize that the experimental setting spans multiple independent games, each with varying levels of difficulty. It is crucial to evaluate performance across all games rather than relying solely on summary statistics like the median. Moreover, metrics such as the mean and IQM (Interquartile Mean) also indicate that IQL-4V outperforms BC. Therefore, we believe the statement \\u201cBC should always be favored over offline RL\\u201d is not accurate in this context.\\n\\n**Q**: Even if the behavior policy is garbage, simply cloning it gives better (or equal, due to the uncertainty) median performance than the considered offline RL methods. To me these results are highly surprising and at least to the best of my knowledge, this is not a universally expected situation in prior offline RL literature either - of course situations exist where BC makes more sense, but those are usually more on the better data quality side.\\n\\n**A**: While the impression that \\u201csituations where BC makes more sense are usually on the better data quality side\\u201d holds in classical RL settings, we respectfully disagree that this observation applies to offline RL with generalization. In fact, limited prior work, such as Mediratta et al. (2023), has demonstrated that standard offline RL methods often perform significantly worse than BC in both expert and mixed data settings.
Notably, the performance gap is even larger in the mixed data setting, which contrasts with the conventional expectation. Therefore, we believe that this observation does not hold in the offline RL with generalization setting, where BC can exhibit competitive or superior performance even with lower-quality data.\\n\\n\\n**Q**: Since you are tackling the offline setting, where one cannot test which method performs better, and a practitioner thus has to choose one method from the start, it seems your data is implying that BC should be always favoured over offline RL no matter what data at hand. In the current manuscript, I think this whole topic is not mentioned at all, which I believe to be problematic due to the surprising & unintuitive results.\\n\\n**A**: We want to highlight that the primary focus of our paper is **not** to propose an algorithm that universally outperforms BC in the challenging multi-environment generalization setting. Instead, our primary contribution lies in being the first to theoretically study zero-shot generalization in offline RL and to design theoretically sound offline RL algorithms with strong generalization ability. Our experiments demonstrate that applying our theoretical analysis-inspired frameworks can significantly improve the performance of baseline offline RL methods. Designing offline RL algorithms that consistently outperform BC remains a promising yet challenging direction for future research, but it is beyond the scope of this paper.\"}",
"{\"title\": \"Thank you for the comments (Part I)\", \"comment\": \"We sincerely appreciate your thoughtful feedback and positive remarks about our theoretical contributions and algorithms. Below, we address your concerns and provide clarifications.\\n\\n**Q1** Applying a pessimistic bias to counteract distributional shifts can lead to over-conservative policies that might underperform, particularly in environments where high-risk, high-reward actions are necessary.\\n\\n**A1** We have some discussions in Remark 8 about why pessimism could help generalization in our ZSG setting. In our framework, pessimism can indeed facilitate generalization, rather than hinder it. Specifically, we employ pessimism to construct reliable Q functions for each environment individually. This approach supports broader generalization by maintaining multiple Q-networks separately. By doing so, we ensure that each Q function is robust within its specific environment, while the collective\\nset of Q functions enables the system to generalize across different environments. Furthermore, our theoretical results demonstrate that the proposed pessimistic approach balances caution and generalization effectively, making it well-suited for ZSG.\\n\\n\\n\\n**Q2** This paper\\u2019s methods also assume that each environment's context information and datasets are either directly available or accurately inferable, which limits the use cases from random sampled or mixed quality datasets.\\n\\n**A2** We respectfully believe there may be a misunderstanding regarding our assumptions. In our setting, context information is required only during training on the offline dataset and is not available during evaluation, nor is it assumed to be perfectly inferable. Furthermore, the offline dataset does not need to include exact context variables\\u2014only labels that distinguish trajectories collected from different environments ($1, 2, \\\\dots, n$). 
These labels enable differentiation across environments without requiring explicit context values or high-quality datasets, thereby broadening the applicability of our approach.\\n\\n\\n\\n**Q3** The suboptimality bounds in the paper rely on having a large number of sufficiently varied environments, and may be loose in situations with fewer or more similar environments, and the policy may perform poorly in unseen contexts.\\n\\n**A3** We agree that the suboptimality bounds depend on the number of environments (n). Specifically, the $I_1$ terms in our bounds scale with $\\\\sqrt{1/n}$, where $n$ is the number of different contextual MDPs with the context drawn from the same distribution $C$. Note that dependency is statistically unavoidable and aligns with standard generalization bounds in supervised learning, where generalization performance improves with more diverse training samples. Intuitively, if the number of samples is too small, we can not expect good generalization results.\\n\\n\\n**Q4** Both PERM and PPPO involve parameters that adjust the degree of pessimism applied, where the sensitivity of them are not fully discussed.\\n\\n**A4** Thank you for this valuable suggestion. The degree of pessimism in our approach is controlled by the uncertainty quantifier $\\\\Gamma$, as defined in Definition 5. A larger $\\\\Gamma$ corresponds to a greater degree of pessimism. Ideally, $\\\\Gamma$ should be chosen as the smallest value that satisfies the inequality in Line 266.\\n\\nThe impact of $\\\\Gamma$ is reflected in the $I_2$ terms in Theorems 9 and 14, where the suboptimality gap scales with $\\\\Gamma$. This indicates that an overly pessimistic uncertainty quantifier may degrade performance by unnecessarily increasing the suboptimality gap. 
Conversely, an appropriately tuned $\\\\Gamma$ balances caution and generalization, ensuring reliable policy evaluation and improved performance.\\n\\n\\n**Q5** PERM requires maintaining separate models or critic functions for each environment in the training set, which can quickly become computationally expensive as the number of environments grows. PPPO, while model-free, still requires multiple policies for different environments, which limits scalability when working with diverse and high-dimensional data.\\n\\n**A5** Thank you for highlighting these concerns. In our paper, we address the practical challenges of the theoretical algorithms with specific mitigations. Practically, we propose merging multiple data splits into a shared context, which leads to the development of the IQL-nV algorithm for our empirical evaluations (see Line 477 and subsequent discussion). Theoretically, we establish rigorous bounds for both PERM and PPPO after merging the datasets, as detailed in Remark 12, Remark 15, and Appendix C of the paper. These measures ensure that our approach remains both computationally feasible and scalable.\"}",
"{\"title\": \"Kind Request for Feedback\", \"comment\": \"Dear Reviewer 15Pv,\\n\\nThank you for your time and thoughtful suggestions on our work. We hope our detailed responses and clarifications have addressed your concerns. With the final ICLR rebuttal deadline approaching in less than 7 hours, we would greatly appreciate any additional feedback or concerns you might have. Your insights would be extremely helpful in refining our submission.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"comment\": \"I thank the authors for their response. I'd like to keep my original rating.\"}",
"{\"comment\": \"Dear Reviewer 15Pv,\\n\\nThank you for your time and thoughtful comments on our work. As the final ICLR rebuttal phase deadline approaches, we would greatly value any additional feedback or concerns you may wish to share.\\n\\nBest,\\\\\\nThe Authors\"}"
]
} |
EBBeSbmAyh | Towards Constraint-aware Learning for Resource Allocation in NFV-enabled Networks | [
"Tianfu Wang",
"Long Yang",
"Chao Wang",
"Chuan Qin",
"Liwei Deng",
"Li Shen",
"Hui Xiong"
] | Virtual Network Embedding (VNE) is a challenging combinatorial optimization problem that refers to resource allocation associated with hard and multifaceted constraints in network function virtualization (NFV). Existing works for VNE struggle to handle such complex constraints, leading to compromised system performance and stability. In this paper, we propose a \textbf{CON}straint-\textbf{A}ware \textbf{L}earning framework for VNE, named \textbf{CONAL}, to achieve efficient constraint management. Concretely, we formulate the VNE problem as a constrained Markov decision process with violation tolerance. This modeling approach aims to improve both resource utilization and solution feasibility by precisely evaluating solution quality and the degree of constraint violation. We also propose a reachability-guided optimization with an adaptive reachability budget method that dynamically assigns budget values. This method achieves persistent zero violation to guarantee the feasibility of VNE solutions and more stable policy optimization by handling instances without any feasible solution. Furthermore, we propose a constraint-aware graph representation method to efficiently learn cross-graph relations and constrained path connectivity in VNE. Finally, extensive experimental results demonstrate the superiority of our proposed method over state-of-the-art baselines. Our code is available at \href{https://anonymous.4open.science/r/iclr25-conal}{https://anonymous.4open.science/r/iclr25-conal}. | [
"Network Resource Allocation; Combinatorial Optimization; Reinforcement Learning; Graph Neural Network"
] | Reject | https://openreview.net/pdf?id=EBBeSbmAyh | https://openreview.net/forum?id=EBBeSbmAyh | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"sPfCJwFmYc",
"sO1CazwQu5",
"rtuRbpJp8o",
"o8ulGvjBaS",
"o43Imr8wfX",
"dfUZh5hgHP",
"cscLEq7Bq1",
"ZfTGJ9AR81",
"WonscussFo",
"RrYEBVjCWH",
"Q5x3NiupXE",
"PItzQIxWoV",
"M2DVHmqc0X",
"LIlCTpsnOK",
"Kb7q5U4V2w",
"HJ3tFyEE2y",
"GTyyzwCp29",
"EJAhHIj0bQ",
"DtX0QwrP6z",
"CYbVdkYpjc",
"AII90ZJ7nB",
"8zghZo8F3c",
"8sujUx1toR",
"7QzEbBLr9e",
"5odEpJVdH3",
"3OIM7QaKFi",
"2SBjeGyoNd",
"2522Pcm3m6",
"01iTTBe2IO"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1737523713316,
1732443625028,
1732443517937,
1732660993773,
1732686666562,
1732443878142,
1730649803676,
1732731644233,
1732444088913,
1732443778863,
1733129629808,
1732976728538,
1732783540478,
1732444262823,
1732741848944,
1732443682532,
1732741980233,
1730114361855,
1732546726232,
1734726592141,
1733129560971,
1732444341042,
1730360965838,
1732548329051,
1732464968371,
1732443190011,
1732546544501,
1732444156724,
1730594448281
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_9yrG"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_z4vG"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_S9pS"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_z4vG"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_iEV6"
],
[
"ICLR.cc/2025/Conference/Submission5553/Area_Chair_EM2s"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_iEV6"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_iEV6"
],
[
"ICLR.cc/2025/Conference/Submission5553/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5553/Reviewer_9yrG"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Author Response to W5 & Q2\", \"comment\": \"> **W5**: Given the rapid development of NFV, some relevant literatures are missing to be discussed, e.g., NFVdeep: Adaptive Online Service Function Chain Deployment with Deep Reinforcement Learning, iwqos\\u201919; Adaptive VNF Scaling and Flow Routing with Proactive Demand Prediction, infocom\\u201918; FlexNFV: Flexible Network Service Chaining with Dynamic Scaling, network\\u201919; Joint Optimization of Chain Placement and Request Scheduling for Network Function Virtualization, icdcs\\u201917, etc.\\n\\nThank you for your suggestion. In the revised manuscript, we have expanded the Related Work section to include additional relevant literature, such as NFVdeep, Adaptive VNF Scaling, FlexNFV, JointOptimization, and more. The added discussions are as follows:\\n- Lines 814-815 on Page 16: Resource management is a critical research direction in NFV, including tasks such as Scaling (Fei et al., 2018; 2020) and scheduling (Zhang et al., 2017). Among these, VNE plays a key role in resource allocation.\\n- Lines 822-823 on Page 16: such as node ranking strategies (Zhang et al., 2018; Gong et al., 2014; Fan et al., 2023) (Jin et al., 2020)\\n- Lines 825-826 on Page 16: many RL-based VNE algorithms have been proposed (Haeri & Trajkovi\\u00b4c, 2017; Wang et al., 2021; Zhang et al., 2022; He et al., 2023a; Zhang et al., 2023b; Geng et al., 2023) (Xiao et al., 2019).\\nThese updates are highlighted in blue for clarity in the revised manuscript.\\n\\n> **Q2**: The paper does not provide a robust method for detecting infeasible instances, nor does it implement any controls or mitigations for the growth of \\u03bb. This oversight means that, when faced with infeasible instances, the model may fail to operate correctly.\\n\\nThank you for your feedback. Detecting infeasible instances is computationally infeasible for NP-hard problems like VNE. 
Instead, our ARB approach dynamically adjusts reachability budgets, effectively mitigating the impact of unsolvable instances without explicit detection. This method is proposed to prevent \\u03bb from diverging, ensuring training stability and maintaining policy performance across various scenarios.\\nWe hope that these responses address your concerns and clarify the contributions and robustness of our method. Your feedback has been invaluable in improving the quality of our work, and we thank you once again for your thoughtful review.\\n\\nAgain, we thank the reviewer for your in-depth suggestions for improving our submission. We hope the responses address your concerns. Thank you for your time and consideration.\"}",
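The λ-growth concern and the adaptive-budget remedy discussed in this exchange can be illustrated with a toy dual-ascent simulation. This is a hypothetical sketch, not the authors' CONAL/ARB algorithm; the function and variable names are invented for illustration:

```python
def dual_ascent(violations, budget, adaptive=False, lr=0.1):
    """Toy Lagrangian dual update: lam <- max(0, lam + lr * (violation - budget)).

    With a fixed, unreachable budget (e.g., demanding zero violation on an
    unsolvable instance), lam grows without bound as training proceeds.
    With adaptive=True, the budget is relaxed toward the best violation
    level observed so far -- loosely mimicking an adaptive reachability
    budget -- so lam stays bounded.
    """
    lam, best = 0.0, float("inf")
    for v in violations:
        best = min(best, v)
        b = max(budget, best) if adaptive else budget
        lam = max(0.0, lam + lr * (v - b))
    return lam

# An "unsolvable" instance: the constraint violation never reaches 0.
violations = [1.0] * 100
print(dual_ascent(violations, budget=0.0))                 # grows linearly with steps
print(dual_ascent(violations, budget=0.0, adaptive=True))  # stays at 0.0
```

Under this toy model, the fixed-budget multiplier scales with the number of training steps, while the relaxed budget stops the update as soon as the best reachable violation level is matched.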
"{\"title\": \"Author Response to W3-4\", \"comment\": \"> **W3**: The integration of virtual and physical networks into a heterogeneous graph with numerous cross-graph links can lead to information redundancy and noise. Noise from irrelevant links can hinder the model's ability to learn meaningful representations.\\n\\nThank you for your feedback. VNE inherently involves embedding the cross-graph status between the virtual and physical networks, as the relationships between the two graphs play a crucial role in determining feasible and optimal solutions. To effectively capture these cross-graph relationships, we employ a heterogeneous graph modeling approach [a], which offers significant advantages over independently processing each graph with separate GNNs. Without cross-graph modeling, crucial relational information between the virtual and physical networks would be lost, limiting the model\\u2019s ability to learn representations that account for these dependencies.\\n\\nIn our design, the addition of cross-graph links is not arbitrary but carefully constructed to reflect explicit semantic relationships between the nodes in the two graphs. Specifically, we introduce two types of heterogeneous links, each with clear semantics: one type connects virtual nodes to their potential physical hosts, and the other captures the constraints related to path connectivity between physical nodes. These explicitly defined links ensure that the model focuses on meaningful interactions, avoiding the introduction of random or noisy connections that could obscure learning. 
Additionally, the number of cross-graph links introduced is the same as the number of physical nodes, $N_p$, ensuring that the graph remains manageable in size and does not suffer from excessive redundancy.\\n\\nHeterogeneous graph modeling has been widely adopted in other domains with multiple interacting graphs, such as entity alignment in knowledge graphs [b], graph matching in computer vision [c], and optimizing Mixed-Integer Linear Programs [d], all of which demonstrate the effectiveness of this approach in learning meaningful cross-graph representations. Similarly, in our framework, the heterogeneous graph not only captures the nuanced relationships between the virtual and physical networks but also enhances the semantic richness of the learned embeddings, ultimately improving the performance of the VNE task.\\n\\n[a] Chuxu Zhang, et al. Heterogeneous Graph Neural Network. KDD, 2019\\n\\n[b] Jia-Chen Gu, et al. RHGN: Relation-gated Heterogeneous Graph Network for Entity Alignment in Knowledge Graphs. ACL, 2023.\\n\\n[c] Runzhong Wang, et al. Neural Graph Matching Network: Learning Lawler's Quadratic Assignment Problem with Extension to Hypergraph and Multiple-graph Matching. TPAMI, 2022\\n\\n[d] Ziang Chen, et al. On Representing Mixed-Integer Linear Programs by Graph Neural Networks. ICLR, 2023.\\n\\n> **W4**: The experiments are mainly conducted on simulated environments and limited network topologies (e.g., GEANT and BRAIN). This may not adequately demonstrate the model's performance.\\n\\nThank you for your feedback. We conducted experiments on simulated environments following most existing studies in this direction. In the submission, we conducted the experiments in both simulated and real-world topologies. 
Specifically, we evaluated CONAL on WX100, WX500, GEANT, and BRAIN, covering a wide range of scalability and network densities:\n\n| Network | Number of Nodes | Number of Links | Network Density |\n|---------|-----------------|-----------------|-----------------|\n| WX100 | 100 | 500 | 0.05 |\n| WX500 | 500 | 13,000 | 0.1042 |\n| GEANT | 40 | 64 | 0.0821 |\n| BRAIN | 161 | 166 | 0.0129 |\n\nAdditionally, we further evaluate performance under various network conditions, such as varying arrival rates of VN requests and dynamic request distributions. To the best of our knowledge, this comprehensive evaluation setup is among the most thorough in the literature [e,f,g,h]. The chosen topologies effectively demonstrate CONAL's scalability, adaptability, and efficiency across different networking contexts.\n\n[e] Sheng Wu, et al. AI-Empowered Virtual Network Embedding: A Comprehensive Survey. IEEE Communications Surveys & Tutorials, 2024.\n\n[f] Song Yang, et al. Recent Advances of Resource Allocation in Network Function Virtualization. TPDS, 2021.\n\n[g] Haoyu Geng, et al. GAL-VNE: Solving the VNE Problem with Global Reinforcement Learning and Local One-Shot Neural Prediction. KDD, 2023.\n\n[h] Further works cited in our submission's references\"}",
"{\"comment\": \"Thank you for your thoughtful and detailed rebuttal to my comments and questions. Most of my concerns have been addressed, and several points of confusion have been clarified. I have slightly increased my score to reflect my improved understanding of your work after reviewing your response and the additional materials provided.\\n\\nI appreciate that your work prioritises real system integration rather than focusing solely on proposing another algorithmic approach, which is a highly commendable motivation. As noted in my previous review, real system implementation prefers testing across diverse scenarios and performance considerations. However, I remain partially unconvinced that the perspectives chosen in this work fully align with the key priorities for real-world NFV networks, particularly given concerns about increased computational complexity.\\n\\nThe ML/AI contributions in this work also seem somewhat narrow in scope. A better articulation of how these contributions advance the field and address specific practical challenges would further strengthen the paper and demonstrate its broader impact beyond incremental improvements. While I appreciate the overall effort, my assessment places this work closer to a 6 than an 8, as the system does not allow a score of 7.\"}",
"{\"comment\": \"Some of our concerns have been addressed. I have slightly increased my score to reflect my improved understanding of your work. However, there are still some ambiguities: 1) Why can all constraints be strictly adhered to after training? Is there a theoretical guarantee for this? 2) It remains unclear why heuristic schemes cannot adapt well to the dynamic changes of the network while DRL can, and why DRL can scale more efficiently to large, complex networks.\"}",
"{\"title\": \"Author Response to Q3\", \"comment\": \"> **Q3**: Elaborate on the rationale behind using contrastive learning in the constraint-aware graph representation module.\\n\\nThank you for your feedback. We leverage contrastive learning (CL) in the constraint-aware graph representation module to enhance the model\\u2019s awareness of complex constraints, particularly its sensitivity to bandwidth constraints, which are critical yet intricate in VNE scenarios. Below, we elaborate on the rationale:\\n\\n- Complexity of bandwidth constraints: Bandwidth constraints play a pivotal role in determining solution feasibility, particularly in the context of path routing complexity. At each decision timestep, we need to carefully select a physical node $n_p^t$ for placing the current virtual node $n_v^t$. This selection is governed by ensuring that feasible connective paths exist to all other physical nodes hosting the virtual node\\u2019s neighbors. Here, path feasibility depends on whether the bandwidth availability of physical links can support the bandwidth requirements of all prepared incident links $\\\\delta'(n_v^t)$. Accurately representing these constraints is vital for generating high-quality embeddings that lead to feasible solutions for VNE instances.\\n- Limitation of Existing GNNs: Conventional GNNs build upon a propagation mechanism that aggregates information along graph links to capture topology information. However, not all physical links contribute positively to this awareness; some with insufficient bandwidths may even introduce noise into node representations. Therefore, these GNNs lack an explicit mechanism to integrate bandwidth constraints, which are essential for perceiving path feasibility. This limitation highlights the need for a more sophisticated approach that incorporates bandwidth awareness into the representation learning process.\\n- Motivation for using CL: 
We utilize CL to address these challenges by explicitly incorporating bandwidth constraint awareness into the graph representation learning process. In the context of bandwidth-constrained VNE, we devise several augmentation methods (e.g., virtual/physical link additions/deletions) to create diverse yet semantically equivalent views of the network graph, introducing variations in connectivity while preserving feasibility. Then, we use a contrastive loss, specifically the Barlow Twins loss, which emphasizes the alignment of representations for bandwidth-feasible paths while penalizing infeasible ones. Theoretically, Barlow Twins minimizes redundancy between embeddings of augmented views by aligning their cross-correlation matrix with the identity matrix [a]. For bandwidth awareness, this mechanism suppresses the influence of irrelevant features (e.g., links with surplus bandwidth) while amplifying features critical to determining bandwidth feasibility in the GNN propagation process [b]. This encourages embeddings to reflect the feasibility of physical link connectivity, which is critical for effective VNE policies.\\nOverall, by incorporating contrastive learning, the constraint-aware graph representation module achieves enhanced bandwidth sensitivity and gains a deeper understanding of path feasibility, both of which are critical for addressing the complex constraints of VNE scenarios.\\n\\n[a] Jure Zbontar, et al. Barlow Twins: Self-Supervised Learning via Redundancy Reduction. ICML, 2021.\\n\\n[b] Yihao Xue, et al. Investigating Why Contrastive Learning Benefits Robustness against Label Noise. ICML, 2022.\\n\\nAgain, we thank the reviewer for your in-depth suggestions for improving our submission. We hope the responses address your concerns. Thank you for your time and consideration.\"}",
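To make the Barlow Twins objective referenced in the response above concrete, here is a minimal NumPy sketch of the loss. This is an illustrative reconstruction from the cited paper's published definition, not the authors' implementation; the batch/dimension shapes and names are assumptions:

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    """Barlow Twins loss: drive the cross-correlation matrix of two
    augmented-view embedding batches toward the identity matrix.

    z_a, z_b: (batch, dim) embeddings of two views of the same inputs.
    """
    n = z_a.shape[0]
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    c = (z_a.T @ z_b) / n  # (dim, dim) cross-correlation matrix
    on_diag = ((1.0 - np.diag(c)) ** 2).sum()            # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lambd * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 16))
w = rng.normal(size=(128, 16))
# Identical views align perfectly; unrelated views do not.
print(barlow_twins_loss(z, z) < barlow_twins_loss(z, w))  # True
```

The invariance term pulls representations of the two views together, while the redundancy term decorrelates embedding dimensions, which matches the redundancy-reduction intuition invoked in the response.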
"{\"summary\": \"The paper proposes a new framework called constraint-Aware Learning (CONAL) to address the Virtual Network Embedding (VNE) problem in network virtualization. Specifically, the paper models the VNE problem as a violation-tolerant CMDP and introduces an adaptive reachability budget (ARB) to handle unsolvable instances.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper models the VNE problem as a violation-tolerant CMDP and introduces an adaptive reachability budget (ARB) to handle unsolvable instances.\", \"weaknesses\": \"1. The paper models the VNE problem as a violation-tolerant CMDP and introduces an adaptive reachability budget (ARB) to handle unsolvable instances. However, when dealing with unsolvable instances, no policy can satisfy the constraints, the Lagrange multiplier \\u03bb may tend to infinity, leading to numerical instability during training. Instability may affect the policy's performance on solvable instances. Provide empirical evidence of the behavior of the Lagrange multiplier \\u03bb during training. Specifically, plot the variation of \\u03bb over training iterations or time to illustrate how it evolves, especially in the presence of unsolvable instances.\\n2. The augmentation methods used in the path-bandwidth contrast module (physical link addition \\u03d5A and virtual link addition \\u03d5B) lack sufficient theoretical and empirical justification. The choice of augmentation ratio \\u03f5 significantly affects model performance, but the paper does not provide detailed analysis or guidelines for selecting these parameters. Provide theoretical explanations for how the augmentation methods contribute to improved bandwidth awareness. \\n3. The integration of virtual and physical networks into a heterogeneous graph with numerous cross-graph links can lead to information redundancy and noise. 
Noise from irrelevant links can hinder the model's ability to learn meaningful representations.\\n4. The experiments are mainly conducted on simulated environments and limited network topologies (e.g., GEANT and BRAIN). This may not adequately demonstrate the model's performance.\\n5. Given the rapid development of NFV, some relevant literatures are missing to be discussed, e.g., NFVdeep: Adaptive Online Service Function Chain Deployment with Deep Reinforcement Learning, iwqos\\u201919; Adaptive VNF Scaling and Flow Routing with Proactive Demand Prediction, infocom\\u201918; FlexNFV: Flexible Network Service Chaining with Dynamic Scaling, network\\u201919; Joint Optimization of Chain Placement and Request Scheduling for Network Function Virtualization, icdcs\\u201917, etc.\", \"questions\": \"Overall, when infeasible instances exist in the Virtual Network Embedding (VNE) problem (i.e., there are no embedding solutions that satisfy all constraints), the optimization method employed in the paper causes the Lagrange multipliers (\\u03bb) to grow unbounded during training. The unbounded growth of \\u03bb leads to overflows or underflows in numerical calculation. The paper does not provide a robust method for detecting infeasible instances, nor does it implement any controls or mitigations for the growth of \\u03bb. This oversight means that, when faced with infeasible instances, the model may fail to operate correctly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Further Response to Reviewer 9yrG\", \"comment\": \"Thank you for your constructive and thoughtful feedback, as well as for taking the time to reevaluate our work. We greatly appreciate your insightful comments and would like to take this opportunity to clarify and further articulate our contributions to the ML/AI field.\\n\\nOur work focuses on advancing machine learning for combinatorial optimization (ML4CO), a research area that has garnered significant attention in the ML community, as discussed in Related Work and highlighted in work `[a,b]`. While most existing studies in ML4CO primarily address classical problems such as Traveling Salesperson Problem (TSP) [c,d], Vehicle Routing Problem (VRP) `[e,f]`, Job Shop Scheduling Problem (JSSP) `[g,h]`, and VNE `[i,j]`, they often overlook the complex constraints. However, many real-world applications are modeled as combinatorial optimization (CO) problems with complex constraints that are critical for real-world applicability. Effectively learning and managing these constraints in ML4CO remains a significant and challenging direction that has yet to be actively explored.\\n\\nIn this paper, we aim to push the boundaries of ML4CO by addressing a highly challenging constrained CO problem, i.e., VNE. VNE is characterized by hard and intricate constraints such as cross-graph resource allocation and bandwidth-constrained path routing. Distinct from most prior works, our focus lies in addressing the significant complexity introduced by these constraints and their practical implications. Below, we review our key contributions:\\n\\n- *Revealing the Impact of Unsolvable Instances*: Through experimental observation and theoretical proof, we highlight the negative impact of unsolvable instances, which are inevitable in practical environments. 
Our analysis shows that such instances hinder the training process and policy performance, an issue largely overlooked in existing ML4CO research.\\n- *Innovative RL Optimization*: To stabilize training in the presence of unsolvable instances, we propose a novel adaptive reachability budget. This innovation prevents divergence, ensures robust convergence in constrained scenarios, and is easily generalizable to other constrained CO problems.\\n- *Rethinking CMDP Modeling*: While existing works often model CO problems directly as MDPs or CMDPs, they simply stop when strict constraints are violated. We address this gap by introducing violation-tolerant CMDP modeling. This enables the complete exploration and precise evaluation of the solution space, thus improving performance.\\n- *Well-designed Graph Representation*: Capturing complex constraints, particularly bandwidth-constrained path routing, is an unexplored and challenging area in GNN and ML4CO research. To address this, we design novel graph augmentation methods and leverage contrastive learning (CL) to improve bandwidth awareness and constraint representation. \\n\\nOur work addresses the unique challenges posed by highly constrained CO problems, a less explored area in ML4CO, specifically in VNE. By revealing critical insights and rethinking CMDP modeling, RL optimization, and GNN representation, we provide a pathway for applying ML to solve more complex and constrained CO problems across diverse domains. We believe our approach extends the frontiers of ML for CO by introducing new paradigms in modeling, optimization, and representation, which are also broadly applicable to other constrained CO problems beyond VNE.\\n\\nThank you again for your feedback, which has been invaluable in reviewing our contributions and articulating their broader impact. We hope this response clarifies the significance of our work and its potential to advance ML4CO and VNE research.\\n\\nReference\\n> [a] Yoshua Bengio, et al. 
Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon. EJOR, 2020.\\n>\\n> [b] [Awesome Machine Learning for Combinatorial Optimization Resources](https://github.com/Thinklab-SJTU/awesome-ml4co)\\n> \\n> [c] Yifan Xia, et al. Position: Rethinking Post-Hoc Search-Based Neural Approaches for Solving Large-Scale Traveling Salesman Problems. ICML, 2024\\n> \\n> [d] Yimeng Min, et al. Unsupervised Learning for Solving the Travelling Salesman Problem. NeurIPS, 2023.\\n> \\n> [e] Qingchun Hou, et al. Generalize Learned Heuristics to Solve Large-scale Vehicle Routing Problems in Real-time. ICLR, 2023.\\n>\\n> [f] Jianan Zhou, et al. Towards Omni-generalizable Neural Methods for Vehicle Routing Problems ICML, 2023.\\n>\\n> [g] David W Zhang, et al. Robust Scheduling with GFlowNets. ICLR, 2023.\\n> \\n> [h] Wonseok Jeon, et al. Neural DAG Scheduling via One-Shot Priority Sampling. ICLR, 2023.\\n>\\n> [i] Tianfu Wang, et al. FlagVNE: A Flexible and Generalizable Reinforcement Learning Framework for Network Resource Allocation. IJCAI, 2024.\\n>\\n> [j] Haoyu Geng, et al. GAL-VNE: Solving the VNE Problem with Global Reinforcement Learning and Local One-Shot Neural Prediction. KDD, 2023.\"}",
"{\"title\": \"Author Response to W1-4\", \"comment\": \"We sincerely appreciate the time and effort you have dedicated to reviewing our submission and for providing insightful comments and constructive feedback. Below, we address each of your concerns in detail.\\n\\n> **W1**: The VNE problem has been investigated several times in the related literature. To be impactful, there is the need that novel solutions in the field do not propose only an algorithmic solution but also the design and implementation of a prototype integrated into real cloud/edge deployment environments. Otherwise, the level of technical originality and relevance could only be limited, given the status of maturity of the research field\\n>\\n> **W2**: The paper does not include in-depth technical insights about how to exactly achieve an effective and efficient design/implementation of the proposed solution into a real prototype. No lessons learned from the experience of real deployment and evaluation in in-the-field deployment scenarios\\n>\\n> **W3**: No systems engineering considerations and lessons learned about how to optimally configure and deploy the proposed solution.\\n\\nThank you for your suggestion. Our work, like many algorithm-focused studies on VNE [a,b,c], emphasizes the development and evaluation of a generalizable and robust algorithmic framework. To ensure comparability with existing research, we adopt widely used benchmarks and experimental setups from the literature. While we acknowledge the importance of prototype design and real-world deployment, our focus is on advancing algorithmic innovations, such as RL and GNN, which are of particular interest to the machine learning community, including venues like ICLR. In future work, we will attempt to deploy our method in real-world settings to bridge the gap between research and practical applications.\\n\\n[a] Sheng Wu, et al. AI-Empowered Virtual Network Embedding:A Comprehensive Survey. 
IEEE Communications Surveys & Tutorials, 2024.\\n\\n[b] Haoyu Geng, et al. GAL-VNE: Solving the VNE Problem with Global Reinforcement Learning and Local One-Shot Neural Prediction. KDD, 2023.\\n\\n[c] Tianfu Wang, et al. FlagVNE: A Flexible and Generalizable Reinforcement Learning Framework for Network Resource Allocation. IJCAI, 2024.\\n\\n> **W4**: The reported performance results are obtained by adopting simulation assumptions that are too simplistic and not realistic for many real deployment environments. I can understand that other papers in the literature have adopted a similar approach, but this is too simplistic. At least the validity of the used assumptions should be better justified and motivated. In addition, why not using real traces from real deployment environments, in particular for request demands?\\n\\nThank you for your feedback. We understand your concern regarding the reliance on simulated environments and the lack of real-world traces. Unfortunately, to the best of our knowledge, there are currently no publicly available datasets. We have carefully reviewed the literature [a, and all works referenced in our submission] and found that nearly all publications in top journals and conferences addressing the VNE problem similarly rely on simulation benchmarks to evaluate their algorithms. To address potential concerns, we designed our simulation datasets meticulously to closely resemble real-world networking conditions. For instance, we incorporated widely accepted topologies, realistic virtual network request patterns, and diverse resource constraints. Detailed information about our simulation settings and the rationale behind them is provided in the manuscript to ensure transparency and reproducibility. These benchmarks, while simulated, have been crafted to provide a close approximation of real-world scenarios and are validated by their widespread adoption in the community. 
We will also monitor the release of real-world datasets in the future and validate our proposed method on them.\\n\\n[a] Sheng Wu, et al. AI-Empowered Virtual Network Embedding: A Comprehensive Survey. IEEE Communications Surveys & Tutorials, 2024.\"}",
"{\"title\": \"Author Response to Q2\", \"comment\": \"> **Q2**: Provide a detailed analysis of the computational complexity of the proposed method, especially compared to baseline methods.\\n\\nThanks for your feedback. Regarding the analysis and comparison of computational complexity, in our submission, we mentioned these aspects in Section 3.4 and provided a detailed analysis in Appendix 4.6. Below is the relevant content in our submission:\\n\\n\\\"CONAL exhibits a computational complexity of $O\\\\left(|N_v| \\\\cdot K \\\\cdot \\\\left(|L_p|d + |N_p+N_v| d^2\\\\right)\\\\right)$, while the complexities of other baseline methods based on RL and GNNs are $O\\\\left(|N_v| \\\\cdot K \\\\cdot \\\\left(|L_p|d + |N_p| d^2\\\\right)\\\\right)$. Here, $N_v$ and $L_v$ denote the number of virtual nodes and links, $N_p$ and $L_p$ denote the number of physical nodes and links, $K$ denotes the number of GNN layers, and $d$ denotes the embedding dimension. Concretely, when constructing a solution for one VNE instance, CONAL performs inference $N_v$ times with the GNN policy, similar to most RL and GNN-based methods. The difference in complexity between CONAL and RL/GNN-based baselines mainly arises from the different neural network structures used, such as GAT and GCN. One GAT and one GCN layer have the same complexity, both $O\\\\left(|L|d + |N| d^2\\\\right)$, where $|N|$ and $|L|$ denote the number of nodes and links [a]. In CONAL, we enhance the GAT with heterogeneous modeling of cross-graph interactions. Each heterogeneous GAT layer consists of three types of GAT layers for VN, PN, and cross-graph interactions (the number of links between virtual and physical nodes is always $N_p$). The complexities for these layers are $O\\\\left(|L_v|d + |N_v| d^2\\\\right)$, $O\\\\left(|L_p|d + |N_p| d^2\\\\right)$, and $O\\\\left(|N_p|d + |N_p+N_v| d^2\\\\right)$, respectively. 
Considering that $|N_v|$ is significantly smaller than $|L_v|$ and typically $L_p > N_p$ in practical network systems, the overall complexity of CONAL is $O\\\\left(|N_v| \\\\cdot K \\\\cdot \\\\left(|L_p|d + |N_p+N_v| d^2\\\\right)\\\\right)$. In comparison, other RL and GNN-based methods separately encode VN and PN with GAT or GCNs, without considering GNN layers for cross-graph interactions, leading to their complexities being $O\\\\left(|N_v| \\\\cdot K \\\\cdot \\\\left(|L_p|d + |N_p| d^2\\\\right)\\\\right)$. Overall, while CONAL slightly increases the complexity compared to existing RL and GNN-based methods due to its heterogeneous modeling approach, it achieves significant performance improvements.\\\"\"}",
"{\"title\": \"Looking Forward to Further Discussion\", \"comment\": \"Dear Reviewer z4vG,\\n\\nWe sincerely appreciate your dedicated time and effort in reviewing our submission and providing thoughtful feedback. We greatly value your insights and hope our responses have adequately addressed your concerns. \\n\\nAs the discussion deadline approaches, we would like to kindly invite you to share any additional feedback or questions you may have. We are more than happy to provide further details or clarifications.\\n\\nThank you once again for your thoughtful comments. We look forward to hearing any further thoughts you might have.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"title\": \"Further Response to Reviewer z4vG on NEW-Q2 (DRL for Networking)\", \"comment\": \"Dear Reviewer z4vG,\\n\\nWe would like to further elaborate on the transformative potential of DRL in the networking domain, particularly its ability to address complex and dynamic challenges. DRL has increasingly demonstrated its effectiveness across a broad range of networking tasks, showcasing adaptability and scalability that meet the intricate demands of modern networks.\\n\\nThe networking community has widely recognized DRL\\u2019s success in diverse scenarios, highlighting its capacity to transform how networking challenges are addressed. Notable applications include network planning [a], adaptive multi-timescale scheduling [b], bandwidth-adaptive compression [c], optimizing control frameworks for RANs [d], resource allocation in NFV [e], and so on. These studies consistently demonstrate that DRL outperforms traditional heuristics by learning optimal strategies in dynamic environments. Its ability to adapt to fluctuating conditions, maintain robust performance, and tackle real-world complexities underscores DRL\\u2019s significant value in networking research and practical applications.\\n\\nCollectively, these advancements illustrate DRL\\u2019s broad applicability and substantial impact. Unlike static heuristic methods, DRL employs a data-driven learning paradigm, enabling it to dynamically adapt to diverse conditions, optimize resource utilization, and make decisions in complex, real-time scenarios. By interacting with the environment, DRL develops strategies that generalize effectively across varying network states and scales efficiently to handle large and intricate topologies.\\n\\nIn our work, we propose a DRL-based solution with constraint awareness to tackle the VNE problem, a constrained combinatorial optimization challenge. 
Our method significantly enhances the model\\u2019s capability to handle intricate constraints, advancing the state of the art in VNE research. Furthermore, our contributions extend to the broader field of machine learning-driven networking solutions, providing a robust framework for addressing the pressing challenges of modern networks.\\n\\nWe sincerely thank you for your thoughtful feedback. We hope this additional perspective further clarifies your ambiguities. We look forward to engaging in further discussion and receiving your insights.\\n\\nSincerely,\\n\\nThe Authors\\n\\nReferences\\n\\n[a] Hang Zhu, et al. Network planning with deep reinforcement learning. SIGCOMM, 2021.\\n\\n[b] Yijun Hao, et al. EdgeTimer: Adaptive Multi-Timescale Scheduling in Mobile Edge Computing with Deep Reinforcement Learning. INFOCOM, 2024.\\n\\n[c] Muhammad Osama Shahid, et al. Cloud-LoRa: Enabling Cloud Radio Access LoRa Networks Using Reinforcement Learning-Based Bandwidth-Adaptive Compression. NSDI, 2024.\\n\\n[d] Azza H. Ahmed, et al. Deep reinforcement learning-based control framework for radio access networks. MOBICOM, 2022.\\n\\n[e] Zeng Y, et al. SafeDRL: Dynamic Microservice Provisioning With Reliability and Latency Guarantees in Edge Environments. IEEE Transactions on Computers, 2023.\"}",
"{\"title\": \"Global Clarification on Impacts\", \"comment\": \"Dear Reviewers,\\n\\nPlease allow us to further articulate our contributions to the ML/AI field, beyond the VNE problem.\\n\\nOur work focuses on advancing machine learning for combinatorial optimization (ML4CO), a research area that has garnered significant attention in the ML community, as discussed in Related Work of our submission and highlighted in work [a,b]. While most existing studies in ML4CO primarily address classical problems such as Traveling Salesperson Problem (TSP) [c,d], Vehicle Routing Problem (VRP) [e,f], Job Shop Scheduling Problem (JSSP) [g,h], and VNE [i,j], they often overlook the complex constraints. However, many real-world applications are modeled as combinatorial optimization (CO) problems with complex constraints that are critical for real-world applicability. Effectively learning and managing these constraints in ML4CO remains a significant and challenging direction that has yet to be actively explored.\\n\\nIn this paper, we aim to push the boundaries of ML4CO by addressing a highly challenging constrained CO problem, i.e., VNE. VNE is characterized by hard and intricate constraints such as cross-graph resource allocation and bandwidth-constrained path routing. Distinct from most prior works, our focus lies in addressing the significant complexity introduced by these constraints and their practical implications. Below, we review our key contributions:\\n\\n- *Revealing the Impact of Unsolvable Instances*: Through experimental observation and theoretical proof, we highlight the negative impact of unsolvable instances, which are inevitable in practical environments. Our analysis shows that such instances hinder the training process and policy performance, an issue largely overlooked in existing ML4CO research.\\n- *Innovative RL Optimization*: To stabilize training in the presence of unsolvable solutions, we propose a novel adaptive reachability budget. 
This innovation prevents divergence, ensures robust convergence in constrained scenarios, and is easily generalizable to other constrained CO problems.\\n- *Rethinking CMDP Modeling*: While existing works often model CO problems directly as MDPs or CMDPs, they simply stop when strict constraints are violated. We address this gap by introducing violation-tolerant CMDP modeling. This enables the complete exploration and precise evaluation of the solution space, thus improving performance.\\n- *Well-designed Graph Representation*: Capturing complex constraints, particularly bandwidth-constrained path routing, is an unexplored and challenging area in GNN and ML4CO research. To address this, we design novel graph augmentation methods and leverage contrastive learning (CL) to improve bandwidth awareness and constraint representation. \\n\\nOur work addresses the unique challenges posed by highly constrained CO problems, a less explored area in ML4CO, specifically in VNE. By revealing critical insights and rethinking CMDP modeling, RL optimization, and GNN representation, we provide a pathway for applying ML to solve more complex and constrained CO problems across diverse domains. We believe our approach extends the frontiers of ML4CO by introducing new paradigms in modeling, optimization, and representation, which are also broadly applicable to other constrained CO problems beyond VNE.\\n\\nWe hope this response clarifies the significance of our work and its potential to advance the ML4CO community beyond VNE research. We are committed to addressing any further questions or concerns you may have. \\n\\nReferences\\n> [a] Yoshua Bengio, et al. Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon. EJOR, 2020.\\n>\\n> [b] [Awesome Machine Learning for Combinatorial Optimization Resources](https://github.com/Thinklab-SJTU/awesome-ml4co)\\n> \\n> [c] Yifan Xia, et al. 
Position: Rethinking Post-Hoc Search-Based Neural Approaches for Solving Large-Scale Traveling Salesman Problems. ICML, 2024.\\n> \\n> [d] Yimeng Min, et al. Unsupervised Learning for Solving the Travelling Salesman Problem. NeurIPS, 2023.\\n> \\n> [e] Qingchun Hou, et al. Generalize Learned Heuristics to Solve Large-scale Vehicle Routing Problems in Real-time. ICLR, 2023.\\n>\\n> [f] Jianan Zhou, et al. Towards Omni-generalizable Neural Methods for Vehicle Routing Problems. ICML, 2023.\\n>\\n> [g] David W Zhang, et al. Robust Scheduling with GFlowNets. ICLR, 2023.\\n> \\n> [h] Wonseok Jeon, et al. Neural DAG Scheduling via One-Shot Priority Sampling. ICLR, 2023.\\n>\\n> [i] Tianfu Wang, et al. FlagVNE: A Flexible and Generalizable Reinforcement Learning Framework for Network Resource Allocation. IJCAI, 2024.\\n>\\n> [j] Haoyu Geng, et al. GAL-VNE: Solving the VNE Problem with Global Reinforcement Learning and Local One-Shot Neural Prediction. KDD, 2023.\"}",
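The violation-tolerant CMDP idea listed among the contributions above can be illustrated with a minimal, self-contained sketch (an editor's toy illustration, not the authors' implementation; all names and numbers are hypothetical): instead of terminating an episode at the first capacity violation, every virtual node is still placed and the overshoot is accumulated as a CMDP cost, so a complete solution can be evaluated.

```python
# Toy sketch of violation-tolerant embedding (illustrative, not the
# authors' code): a strict MDP would stop at the first violated capacity
# constraint; here the action is applied anyway and the violation degree
# is recorded as a cost, yielding a complete, evaluable solution.

def embed_nodes(demands, capacities):
    """Greedily place each virtual-node demand on the physical node with
    the most remaining capacity; return (placement, total_violation)."""
    remaining = list(capacities)
    placement, total_violation = [], 0.0
    for demand in demands:
        i = max(range(len(remaining)), key=lambda j: remaining[j])
        total_violation += max(0.0, demand - remaining[i])  # cost, not a stop signal
        remaining[i] -= demand                              # apply the action regardless
        placement.append(i)
    return placement, total_violation
```

For example, `embed_nodes([3, 3, 3], [5, 4])` overshoots by one unit on the third placement yet still returns the complete placement `[0, 1, 0]` with cost `1.0`, which is exactly the full-solution evaluation that early termination would forfeit.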
"{\"title\": \"Author Response to W1&Q1 and W2-3\", \"comment\": \"We sincerely appreciate the time and effort you have dedicated to reviewing our submission and for providing insightful comments and constructive feedback. Below, we address each of your concerns in detail.\\n\\n> **W1**: There has been some work examining about RL and NFV [1,2] and various approaches have been devised for solving the constraint violations therein, and it is not clear where this paper excels in relation to them.\\n>\\n> **Q1**: Compare their work with more existing studies on solving constraint violations in applying RL to NFV \\uff08not just the papers mentioned above\\uff09, and compare it with them in experiments and related works.\\n\\nThank you for your suggestion. In the revised manuscript, we have expanded the Related Work section to include comparisons with the mentioned studies and additional works. The main added discussions are as follows:\\n- Lines 830-837 on Page 16: In particular, Gu et al. (2020) proposed a model-assisted DRL framework that leverages heuristic solutions to guide the training process, reducing reliance on the agent's blind exploration of actions. However, this approach struggles to handle the complex constraints of VNE, thereby compromising performance. Zeng et al. (2024) introduced the SafeDRL algorithm that corrects constraint violations using high-quality feasible solutions through expert intervention. However, this reliance on external corrections ignores policy-level constraint awareness, which may limit its adaptability and performance. To address these challenges, we explore learning a constraint-aware VNE policy by innovating existing MDP modeling, representation learning, and policy optimization.\\n\\nIn addition, we will provide the empirical comparison in a subsequent response.\\n\\n> **W2**: Why use DRL for VNE when heuristics do not always face high time overhead? \\n\\nThank you for your question. 
Heuristic methods have traditionally been favored for VNE due to their simplicity and efficiency; however, they come with significant limitations. Heuristics rely on manually designed rules and expert knowledge, which often fail to generalize across varying network conditions or novel scenarios [a]. Their static nature can lead to suboptimal decision-making, particularly when handling complex trade-offs and constraints inherent in VNE. In contrast, DRL eliminates the need for such manual designs by automatically learning super-heuristics through interaction with the environment [b]. This data-driven approach enables DRL to identify efficient patterns and solutions that are not readily apparent to human designers. Furthermore, DRL dynamically adjusts to changes in network conditions and scales effectively to large and complex networks, making it particularly suitable for real-world VNE scenarios where resource availability and topologies are constantly evolving. Our experimental results highlight that DRL-based methods consistently outperform heuristic approaches across critical metrics. Importantly, the inference phase of DRL exhibits competitive time efficiency with heuristics. As shown in Appendix G.2, the slight increase in time overhead during inference is offset by the substantial gains in solution quality. Overall, while heuristics provide simplicity, their adaptability and performance are limited. DRL addresses these limitations by delivering significant performance improvements while maintaining competitive time efficiency, offering a high-quality solution for VNE.\\n\\n[a] Yoshua Bengio, et al. Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon. EJOR, 2020.\\n\\n[b] Federico Berto, et al. RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark. 
NeurIPS GLFrontiers Workshop, 2023.\\n\\n> **W3**: The fact that real-world system validation is not based on real-world system implementations but is still based on simulation should not be blown out of proportion.\\n\\nThank you for your feedback. Following most existing studies [c, d], we have conducted simulation-based evaluations on several real-world network topologies, which is a widely adopted approach in the field. To avoid potential misinterpretation, we have updated the section title in the revised manuscript to \\\"Real-world Network Topology Validation\\\".\\n\\n[c] Nan He, et al. Leveraging deep reinforcement learning with attention mechanism for virtual network function placement and routing. TPDS, 2023.\\n\\n[d] Zhongxia Yan, et al. Automatic Virtual Network Embedding: A Deep Reinforcement Learning Approach with Graph Convolutional Networks. JSAC, 2020.\"}",
"{\"title\": \"Further Response to Reviewer z4vG on NEW-Q1\", \"comment\": \"Thank you for your thoughtful feedback and for revisiting our work. We are pleased that some of your concerns have been addressed, and we appreciate the opportunity to clarify the remaining ambiguities.\\n\\n> **NEW-Q1**: Why can all constraints be strictly adhered to after training? Is there a theoretical guarantee for this?\\n\\nThank you for your insightful question. While our training method has theoretical guarantees, it is important to note that the trained policy may not always strictly adhere to all constraints in every case. Below, we clarify both the theoretical guarantees of our approach and the potential factors affecting constraint satisfaction in practice:\\n\\n**Theoretical Guarantees in Training**: Our approach employs a Lagrangian PPO-based method with reachability analysis to train a policy with strict adherence to constraints. Reachability analysis establishes the largest feasible set\\u2014a subset of states where constraints can be persistently satisfied [a]. This is achieved through a feasible value function, which quantifies the worst-case constraint violations over time, guiding the learning process to avoid states that risk violating constraints. By incorporating these insights, our method proactively learns policies that operate within this feasible set. \\nThis Lagrangian-based PPO training method with reachability analysis has been proven via multi-time-scale stochastic approximation theory in [b]. 
This ensures convergence of the learned policy to a local optimum, where all safety constraints are satisfied within the feasible set.\\n\\n**Practical Challenges in Adherence**: Despite the theoretical robustness, strict constraint adherence in practice may be influenced by several factors: \\n- *Unsolvable Instances*: When no feasible solution exists for an instance (e.g., due to overly restrictive constraints or insufficient resources), strict adherence is inherently unattainable. \\n- *Feature Representation Limitations*: The expressiveness of the feature representation is critical for accurately perceiving state constraints and improving the quality of the solution. Insufficient representations may lead to suboptimal decisions and constraint violations. This underscores the significance of our constraint-aware graph representation, which alleviates this limitation by enhancing the model's ability to represent complex constraints, such as bandwidth feasibility. However, challenges may still persist in scenarios with exceptionally intricate constraint conditions.\\n- *RL Optimization Challenges*: Practical RL training often faces challenges such as local minima or insufficient exploration, which may result in suboptimal policies that do not strictly adhere to all constraints. These are well-known limitations in RL and are not unique to our method [c]. While our adaptive reachability budget method mitigates these issues by providing additional stability during training, they remain an open problem in DRL.\\n\\nOverall, our approach is trained with theoretical guarantees for operating within the largest feasible set, and addresses key challenges like unsolvable instances, complex constraints, and stable optimization. 
While practical factors such as feature representation quality and optimization dynamics may affect strict adherence, our advancements in CMDP modeling, constraint-aware graph representation, and adaptive optimization result in a more efficient solution for VNE.\\n\\n[a] Somil Bansal, et al. Hamilton-Jacobi Reachability: A Brief Overview and Recent Advances. CDC, 2017.\\n\\n[b] Dongjie Yu, et al. Reachability Constrained Reinforcement Learning. ICML, 2022.\\n\\n[c] Shangding Gu, et al. A Review of Safe Reinforcement Learning: Methods, Theories and Applications. TPAMI, 2024.\"}",
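The interplay between the Lagrange multiplier and the budget discussed in this response can be sketched in a few lines (an editor's toy model under stated assumptions, not the authors' algorithm): with a fixed budget of zero, an unsolvable instance whose violation cost cannot drop below some level drives the multiplier upward without bound, while a budget that adapts toward the attainable violation level keeps it bounded.

```python
# Toy model of dual ascent on the Lagrange multiplier lambda (illustrative,
# not the authors' algorithm). For an unsolvable instance the per-step
# violation cost stays positive, so a fixed zero budget makes lambda grow
# without bound; adapting the budget toward the attainable violation level
# keeps lambda bounded, mirroring the stabilizing role of an adaptive budget.

def lambda_trajectory(costs, budget=0.0, lr=0.5, adaptive=False):
    lam, traj = 0.0, []
    for cost in costs:
        if adaptive:
            budget = 0.9 * budget + 0.1 * cost      # track attainable violation level
        lam = max(0.0, lam + lr * (cost - budget))  # dual gradient ascent step
        traj.append(lam)
    return traj
```

With a persistently violated constraint, e.g. `costs = [1.0] * 50`, the fixed-budget trajectory climbs to `25.0`, while the adaptive-budget trajectory plateaus below `5.0`, a minimal analogue of the divergence-vs-stability behavior described above.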
"{\"title\": \"Author Response to W1&Q1 and W2\", \"comment\": \"We sincerely appreciate the time and effort you have dedicated to reviewing our submission and for providing insightful comments and constructive feedback. Below, we address each of your concerns in detail.\\n\\n> **W1**: The framework seems to assume a static PN setting, which may not hold in dynamic environments like mobile edge computing.\\n> \\n> **Q1**: How does the proposed method perform in dynamic network environments where PN topology and resource availabilities change over time?\\n\\nWe appreciate your feedback. In this work, we mainly focus on the resource allocation problem in NFV networks, i.e., VNE problem. In dynamic environments like mobile edge computing, the PN topology and resource availabilities can fluctuate over time, introducing additional complexities. This requires additional mechanisms for handling network service migrations, scheduling and backup, which go beyond the scope of the resource allocation task addressed by CONAL.\\n\\nThat said, CONAL's inherent flexibility makes it adaptable to changes in network conditions, regarding both PN and VN. Specifically, CONAL's constraint-aware graph representation can accommodate changes in network topologies and resource availabilities, stemming from the GNN's adaptability and generalizability. This method allows it to handle resource fluctuations effectively at each given snapshot in dynamic network scenarios. \\n\\nTo evaluate CONAL's performance under dynamic conditions without explicitly incorporating additional algorithm designs for migration, scheduling or backup, we conducted experiments using a Dynamic Request Distribution Testing setup (referenced in Appendix G.2). These experiments simulate scenarios where VN topology and resource availability vary dynamically. 
The results demonstrate that CONAL effectively adapts to resource fluctuations from the VN perspective, highlighting its potential to generalize to dynamic environments, including similar dynamic conditions in PN.\\n\\n> **W2**: The focus is mainly on computing and bandwidth constraints. Other important factors, such as latency, reliability, and energy efficiency, are not addressed.\\n\\nThank you for your feedback. To emphasize the generalizability of the proposed method and the clarity of our contributions, this work focuses on developing a general framework for managing complex constraints in VNE. Among the various constraints in NFV-enabled networks, bandwidth and computing resources are often the most critical and general constraints and therefore form the primary focus of this study. Other factors, such as latency, reliability, and energy efficiency, represent specific variations of the VNE problem. Our framework is flexible and can be extended to incorporate these constraints by adapting the CMDP formulation and the constraint-aware graph representation. We have incorporated these factors as future research directions.\"}",
"{\"title\": \"Further Response to Reviewer z4vG on NEW-Q2\", \"comment\": \"> **NEW-Q2**: Why heuristic schemes cannot adapt well to the dynamic changes of the network, while DRL can, is unclear. Why DRL can scale more efficiently to large complex networks is unclear.\\n\\nThank you for your insightful question. Heuristic schemes are typically based on manually designed rules informed by expert knowledge, which inherently limits their quality and adaptability. These rules are often tailored to specific scenarios and lack the flexibility required to handle dynamic environments. For example, network changes such as fluctuating traffic or resource availability often render static heuristics ineffective, as their rule-based nature cannot dynamically adjust to evolving conditions. This rigidity often leads to suboptimal decisions in environments with high dynamics. It is impractical to curate heuristics for all possible scenarios relying on human expertise, due to the overwhelming diversity and unpredictability of real-world conditions.\\n\\nIn contrast, DRL operates on a data-driven learning paradigm to derive effective policies, reducing dependence on manual rule design. DRL excels at handling such complexity and dynamics, which is why it is widely adopted in highly dynamic scenarios like autonomous vehicles [d], robotics [e] and order-dispatching [f]. By interacting with the environment, DRL agents explore and learn policies across diverse conditions. In our work, as mentioned in experimental settings, we train DRL agents in environments simulating a wide range of scenarios, including fluctuating traffic demands and resource availability, ensuring exposure to varying network states. Furthermore, DRL operates on the state-action-reward paradigm, where agents observe the current state and take actions that maximize long-term rewards. 
This paradigm further enables DRL-based solutions to generalize effectively to unseen situations and dynamically adapt to changing environments. \\n\\nRegarding scalability, VNE, as an NP-hard problem, suffers from combinatorial explosion as network size increases. Heuristics, with their static design, struggle to handle this growth due to the exponential increase in complexity. In contrast, DRL leverages neural networks to approximate policies, enabling efficient handling of high-dimensional solution spaces. This capability is crucial for large-scale networks with numerous nodes, links, and dynamic conditions. Integrating GNNs into DRL further enhances scalability by providing structured representations of complex network topologies. This allows DRL-based methods to effectively encode intricate graph-structured constraints and optimize solutions.\\n\\nOur experiments demonstrate that CONAL and other advanced DRL-based methods often maintain high performance in both dynamic and large-scale network scenarios. This empirical evidence also underscores the scalability and adaptability of DRL compared to heuristic methods in the VNE problem.\\n\\n[d] Shuo Feng, et al. Dense reinforcement learning for safety validation of autonomous vehicles. Nature, 2023.\\n\\n[e] Chen Tang, et al. Deep Reinforcement Learning for Robotics: A Survey of Real-World Successes. arXiv, 2024.\\n\\n[f] Zhaoxing Yang, et al. Rethinking Order Dispatching in Online Ride-Hailing Platforms. KDD, 2024.\\n\\nWe hope this response clarifies the remaining ambiguities. We are committed to addressing any further concerns and are grateful for your thoughtful feedback, which continues to improve our work.\"}",
"{\"summary\": \"This paper presents a Constraint-aware Network Abstraction Layer (CONAL) tailored for Virtual Network Embedding (VNE) to advance constraint management and improve training robustness, key factors for optimizing network system performance and reliability. By framing VNE as a violation-tolerant Constrained Markov Decision Process (CMDP), the authors aim to enhance solution quality and feasibility, ensuring complete solutions that accurately assess solution quality. The paper introduces a reachability-guided objective, paired with an adaptive feasibility budget method, to guarantee ongoing constraint satisfaction while reducing policy conservativeness and stabilizing policy optimization even with unsolvable instances. To address the complexity of VNE constraints, a constraint-aware graph representation is proposed, featuring a heterogeneous modeling module to capture cross-graph relationships and a path-bandwidth contrast module for heightened sensitivity to bandwidth constraints.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors propose a violation-tolerant Constrained Markov Decision Process (CMDP) modeling approach, which effectively evaluates solution quality and constraint violation levels, thereby enhancing solution feasibility and resource utilization efficiency.\", \"weaknesses\": [\"There has been some work examining about RL and NFV [1,2] and various approaches have been devised for solving the constraint violations therein, and it is not clear where this paper excels in relation to them.\", \"[1] Gu L, Zeng D, Li W, et al. Intelligent VNF orchestration and flow scheduling via model-assisted deep reinforcement learning[J]. IEEE Journal on Selected Areas in Communications, 2019, 38(2): 279-291.\", \"[2] Zeng Y, Qu Z, Guo S, et al. SafeDRL: Dynamic Microservice Provisioning With Reliability and Latency Guarantees in Edge Environments[J]. 
IEEE Transactions on Computers, 2023.\", \"Why applying DRL to cope with VNE is unclear: existing heuristics do not always face high time overhead, and it is not clear which specific problems face which specific performance limitations, and why.\", \"The fact that real-world system validation is not based on real-world system implementations but is still based on simulation should not be blown out of proportion.\", \"The author claims that reinforcement learning learns effective strategies from unlabeled datasets; however, reinforcement learning actually learns strategies through interaction with the environment.\", \"Is constraint violation really acceptable for VNE? Is it reasonable that constraint violations are allowed in the designed solution?\"], \"questions\": \"Compare their work with more existing studies on solving constraint violations in applying RL to NFV \\uff08not just the papers mentioned above\\uff09, and compare it with them in experiments and related works.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"I am OK with the revised version, thanks for the additional material\", \"comment\": \"Thanks for the kind and detailed rebuttal.\\nI have appreciated the additional material included, in particular the additional quantitative performance measurements. \\nI will consider these elements and slightly revise my review accordingly.\"}",
"{\"metareview\": \"The paper presents an RL algorithm that solves a combinatorial problem motivated by computer networking, namely, Virtual Network Embedding (VNE). The authors model this as a violation-tolerant Constrained Markov Decision Process. The authors also propose a constraint-aware graph representation method to efficiently learn cross-graph relations and constrained path connectivity in VNE.\\n\\nFrom a practical standpoint, this problem and fast ML solutions to it seem to be of interest to the networking community. Reviewers however raised concerns about whether studying this particular combinatorial-optimization problem and improving performance over competitors constitutes an algorithmic contribution of interest to the ICLR community. Some reviewers questioned the technical novelty; I would add that one would expect, at this point, an ML solution to combinatorial optimization problems to be evaluated on more than one class of problems. Another recurring issue among reviewers was that there is no guarantee that constraints are satisfied.\", \"additional_comments_on_reviewer_discussion\": \"Several reviewers remained concerned about the size of problems solved, the use of synthetic data, the purported latency for computation that would limit application to all but small examples and, most importantly, the fact that solutions inevitably lead to violations of constraints.\"}",
"{\"title\": \"Looking Forward to Further Discussion\", \"comment\": \"Dear Reviewer S9pS,\\n\\nWe sincerely appreciate your dedicated time and effort in reviewing our submission and providing thoughtful feedback. We greatly value your insights and hope our responses have adequately addressed your concerns. \\n\\nAs the discussion deadline approaches, we would like to kindly invite you to share any additional feedback or questions you may have. We are more than happy to provide further details or clarifications.\\n\\nThank you once again for your thoughtful comments. We look forward to hearing any further thoughts you might have.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"title\": \"Author Response to W4-5\", \"comment\": \"> **W4**: The claim that reinforcement learning learns effective strategies from unlabeled datasets is misleading. RL learns through interaction with the environment.\\n\\nWe appreciate your feedback and the opportunity to clarify this point. Reinforcement Learning (RL) can learn strategies either through interactions with the environment or from unlabeled datasets, as demonstrated in prior works [e, f]. Both methods could be used to address the VNE problem. Here, we aim to emphasize the distinction between RL and supervised learning, particularly regarding the absence of reliance on labeled data. To avoid any misunderstanding, we have updated the related description as follows:\\n\\\"Recently, Reinforcement Learning (RL) has been a potential direction for VNE, which learns effective solving policies without the need of labeled datasets.\\\"\\nWe have updated the text accordingly in the revised manuscript, which is highlighted in blue.\\n\\n[e] Philip J. Ball, et al. Efficient Online Reinforcement Learning with Offline Data. ICML, 2023.\\n\\n[f] Tianhe Yu, et al. How To Leverage Unlabeled Data in Offline Reinforcement Learning. ICML, 2022.\\n\\n> **W5**: Is constraint violation really acceptable for VNE ? Is it reasonable that constraint violations are allowed in the designed solution?\\n\\nThank you for your feedback. VNE requires strict adherence to zero constraint violations. In CONAL, we only enable constraint violation tolerance during the training phase to generate complete solutions where constraints may otherwise be unsatisfiable. This approach facilitates a precise evaluation of solution quality and constraint violation degrees, which helps us learn effective strategies for handling highly constrained VNE scenarios. However, as stated in Lines 225\\u2013227 (Section 3.1, Page 5) of the manuscript, constraint violations are not permitted during inference. 
Once the policy has been trained, it operates under strict adherence to all constraints, ensuring feasibility and compliance in deployment scenarios.\\n\\nAgain, we thank you for your in-depth suggestions for improving our submission. We hope our responses address your concerns. Thank you for your time and consideration.\"}",
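The strict-adherence-at-inference point made in this response can be illustrated with a small sketch (an editor's illustration under assumed names, not the authors' code): candidate actions whose embedding would violate a constraint are excluded before the policy's choice is applied, so a deployed policy can never select a violating action.

```python
# Toy sketch of strict constraint adherence at inference (illustrative,
# not the authors' code): policy scores are only considered for actions
# that pass a feasibility check; if nothing is feasible, the request is
# rejected rather than embedded with a violation.

def pick_feasible_action(scores, is_feasible):
    """Return the index of the best-scoring feasible action,
    or None if no feasible action exists (i.e., reject the request)."""
    best_idx, best_score = None, float("-inf")
    for idx, (score, ok) in enumerate(zip(scores, is_feasible)):
        if ok and score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```

For instance, `pick_feasible_action([0.9, 0.5, 0.7], [False, True, True])` returns `2`: the top-scoring action is masked out as infeasible, and a request with no feasible action yields `None` instead of a constraint-violating embedding.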
"{\"summary\": \"The paper proposes a solution based on a Constrained Markov Decision Process for resource allocation in NFV-enabled networks. The problem of resource allocation in those networks (called Virtual Network Embedding in the related literature) is well-known in the research community and several solutions for it have already been proposed. Anyway, the proposed solution is sufficiently original and shows to achieve good performance results if compared with the primary baselines already existing in the literature.\\nHowever, the paper is weak in terms of in-depth technical insights about how to efficiently implement the proposed solution, of insufficient experimental evaluation and validation, and of potential impact in the field (see the following parts of this review form).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The addressed topic is interesting and relevant, even if already well investigated in the related literature\", \"The proposed problem formulation and the deriving algorithmic solution are technically sound and do not exhibit big technical flaws\", \"The reported performance results are interesting and show that the proposed solution can outperform several related baselines in the existing literature\", \"The paper is generally well organized and well written\"], \"weaknesses\": [\"The VNE problem has been investigated several times in the related literature. To be impactful, there is the need that novel solutions in the field do not propose only an algorithmic solution but also the design and implementation of a prototype integrated into real cloud/edge deployment environments. Otherwise, the level of technical originality and relevance could only be limited, given the status of maturity of the research field\", \"The paper does not include in-depth technical insights about how to exactly achieve an effective and efficient design/implementation of the proposed solution into a real prototype. 
No lessons learned from the experience of real deployment and evaluation in in-the-field deployment scenarios\", \"No systems engineering considerations and lessons learned about how to optimally configure and deploy the proposed solution\", \"The reported performance results are obtained by adopting simulation assumptions that are not realistic for many real deployment environments. I can understand that other papers in the literature have adopted a similar approach, but this is too simplistic. At least the validity of the used assumptions should be better justified and motivated in the paper. In addition, why not using real traces from real deployment environments, in particular for request demands?\", \"Even if the paper is generally well organized and well written, a few writing inaccuracies are still present in the manuscript and call for some minor revision work in order to improve the paper presentation style. Only to mention one example: \\\"Addtional\\\" in page 24.\"], \"questions\": \"Please see the previous parts of this review form, in particular the weaknesses part above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer iEV6\", \"comment\": \"Dear Reviewer iEV6,\\n\\nThank you for your suggestion and for sharing the [link](https://www.sciencedirect.com/topics/computer-science/pcap-file). We have reviewed the resource link but were unable to find a directly applicable dataset. Additionally, after an extensive search for datasets, we still find that relevant datasets are limited.\\n\\nWe attribute this to the inherent complexity and emerging characteristics of NFV systems. The datasets required for this domain, particularly for VNE, are not only time-series data related to virtual machines in datacenters but also need to include user networking services represented in a graph format.\\n\\nWe remain committed to actively seeking real-world datasets that meet these specific requirements for further validation. \\n\\nWe deeply value your suggestion and sincerely thank you for your prompt response.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"title\": \"Author Further Response to Q1\", \"comment\": \"> **Q1**: Compare their work with more existing studies on solving constraint violations in applying RL to NFV \\uff08not just the papers mentioned above\\uff09, and compare it with them in experiments and related works.\\n\\nThank you for your question. To provide a more comprehensive comparison, we incorporated the mDDPG method (JSAC, 2019) proposed in [a] and the latest SafeDRL method (TC, 2023) proposed in [b] into the experiments. The mDDPG method leverages heuristic solutions to guide the training process and mitigate convergence challenges but does not explicitly address constraint violations in the VNE context. On the other hand, SafeDRL focuses on violation correction by using high-quality feasible solutions, specifically in the microservice provisioning task. We adapted both methods to the VNE problem setting and implemented them using the Virne framework to enable a fair comparison. The key performance results are summarized below:\\n\\n| Method | VN RAC \\u2191 | LT R2C \\u2191 | LT REV (\\u00d710\\u2077) \\u2191 | AVG ST (\\u00d710\\u207b\\u00b9 s) \\u2193 |\\n|------------|----------------------|---------------------|-----------------|--------------------|\\n| mDDPG | 0.711 \\u00b1 0.017 | 0.493 \\u00b1 0.004 | 7.976 \\u00b1 0.152 | **3.341 \\u00b1 0.075** |\\n| SafeDRL | 0.746 \\u00b1 0.013 | 0.512 \\u00b1 0.004 | 9.007 \\u00b1 0.180 | 3.541 \\u00b1 0.144 |\\n| **CONAL** | **0.813 \\u00b1 0.042** | **0.614 \\u00b1 0.006** | **9.842 \\u00b1 0.091** | 4.180 \\u00b1 0.104 |\\n\\nThe results demonstrate that CONAL consistently outperforms both SafeDRL and mDDPG across key metrics.\\nThe superior performance of CONAL can be attributed to its tailored constraint-aware design, which addresses the intricate requirements of VNE scenarios.\\nBelow, we compare the core design features:\\n\\n| Feature | CONAL | SafeDRL | mDDPG 
|\\n|-----------------------|----------------------------------------------------|-------------------------------------------------|-----|\\n| **MDP Modeling** | Constrained MDP with violation tolerance | Traditional MDP modeling without explicit constraint handling | Traditional MDP modeling without explicit constraint handling |\\n| **Representation** | Constraint-aware graph representation | Multi-Layer Perceptron (MLP) | Multi-Layer Perceptron (MLP) |\\n| **Optimization** | Lagrangian PPO with adaptive reachability budget | DDPG | DDPG |\\n| **Handling Violations** | Proactive through modeling and optimization strategies | Violation correction via expert intervention | Not explicitly addressed |\\n\\nIn summary, CONAL\u2019s violation-tolerant CMDP modeling, constraint-aware graph representation, and adaptive optimization techniques collectively enable it to address the unique complexities of the VNE problem effectively.\\n\\n\\n[a] Gu L, Zeng D, Li W, et al. Intelligent VNF orchestration and flow scheduling via model-assisted deep reinforcement learning[J]. IEEE Journal on Selected Areas in Communications, 2019, 38(2): 279-291.\\n\\n[b] Zeng Y, Qu Z, Guo S, et al. SafeDRL: Dynamic Microservice Provisioning With Reliability and Latency Guarantees in Edge Environments[J]. IEEE Transactions on Computers, 2023.\"}",
"{\"title\": \"Author Response to W1&Q1 and W2\", \"comment\": \"We sincerely appreciate the time and effort you have dedicated to reviewing our submission and for providing insightful comments and constructive feedback. Below, we address each of your concerns in detail.\\n\\n> **W1 & Q1**: Provide empirical evidence of the behavior of the Lagrange multiplier \\u03bb during training. Specifically, plot the variation of \\u03bb over training iterations or time to illustrate how it evolves, especially in the presence of unsolvable instances.\\n\\nThank you for your suggestion. We have included an additional analysis of the behavior of \\u03bb during training in Appendix G.5. Specifically, we conducted experiments under arrival rates $\\\\eta = 0.14$ of VN requests, corresponding to moderate proportions of unsolvable instances. We compared the performance of CONAL with and without the ARB mechanism. The $\\\\lambda$ was monitored over 300 training steps on the WX100 topology, with results shown in Figure 11. As training progresses, the \\u03bb values in CONAL without ARB tend to diverge towards extreme values. In contrast, the results demonstrate that our ARB effectively stabilizes \\u03bb, preventing divergence and ensuring robust training while avoiding numerical instability. Please see Appendix G.5 for more details.\\n\\n> **W2**: The augmentation methods used in the path-bandwidth contrast module (physical link addition \\u03d5A and virtual link addition \\u03d5B) lack sufficient theoretical and empirical justification. The choice of augmentation ratio \\u03f5 significantly affects model performance, but the paper does not provide detailed analysis or guidelines for selecting these parameters. Provide theoretical explanations for how the augmentation methods contribute to improved bandwidth awareness.\\n\\nThank you for your feedback. 
In the path-contrast module, we devise several augmentation methods to create diverse yet semantically equivalent views of the network graph, introducing variations in connectivity while preserving feasibility. Then, we use a contrastive loss, specifically the Barlow Twins loss, which emphasizes the alignment of representations for bandwidth-feasible paths while penalizing infeasible ones. Theoretically, Barlow Twins minimizes redundancy between embeddings of augmented views by aligning their cross-correlation matrix with the identity matrix [a]. For bandwidth constraint awareness, this mechanism suppresses the influence of irrelevant features (e.g., links with surplus bandwidth) while amplifying features critical to determining bandwidth feasibility in the GNN propagation process. This principle has been validated in related studies [b], demonstrating its efficacy in various domains.\\n\\nThis augmentation-contrastive learning framework has been successfully applied in other fields. Typically, existing works design augmentations heuristically or apply stochastic perturbations to generate views while maintaining semantic equivalence, including computer vision [c], knowledge graphs [d], recommendation systems [e], etc. In our case, the augmentation methods (\u03d5A and \u03d5B) are carefully designed to create variations of the heterogeneous graph while maintaining the original feasibility. These augmentations enhance the model's ability to distinguish bandwidth-feasible paths by exposing it to a range of scenarios during training. This approach enables the GNN to learn robust and generalizable representations that prioritize bandwidth-critical features. \\n\\nRegarding the augmentation ratio \u03f5, we provided a detailed analysis of its impact on model performance in Appendix G.4 in our submission as follows. \\\"As the augment ratio initially increases from 0 to 1.0, we observe improvements across performance metrics. 
However, when the augment ratio is increased beyond 1.0, these improvements become marginal or even negative. This indicates that excessive enhancement of the graph structure can increase learning difficulty. The increasing disparity between the enhanced and original graph topologies may also negatively impact performance. This study reveals that a reasonable augment ratio \u03b5 benefits the model by improving its sensitivity to bandwidth constraints. However, excessively high \u03b5 values provide only slight improvements or can even degrade performance.\\\" We have also included guidance on its selection in the revised manuscript: \\\"Generally, setting \u03b5 = 1.0 or a value close to it provides a balanced tradeoff between performance enhancement and model robustness.\\\"\\n\\n[a] Jure Zbontar, et al. Barlow Twins: Self-Supervised Learning via Redundancy Reduction. ICML, 2021.\\n\\n[b] Yihao Xue, et al. Investigating Why Contrastive Learning Benefits Robustness against Label Noise. ICML, 2022.\\n\\n[c] Ting Chen, et al. A Simple Framework for Contrastive Learning of Visual Representations. ICML, 2020.\\n\\n[d] Zheye Deng, et al. GOLD: A Global and Local-aware Denoising Framework for Commonsense Knowledge Graph Noise Detection. EMNLP, 2023.\\n\\n[e] Xuheng Cai, et al. LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation. ICLR, 2023.\"}",
"{\"comment\": \"Thanks for the kind and detailed rebuttal.\\nI can understand your point about the fact that the majority of published articles have only an algorithmic approach (not aiming at real system integration), but anyway this is now insufficient IMHO, given the relative maturity level reached by the research activities in this field.\\nAbout the usage of simulated traffic traces, again I can understand the point that most papers adopt this approach. However, some datacenter traces start to be available and could be considered. For example, only via a rapid Google search you can find: https://www.sciencedirect.com/topics/computer-science/pcap-file\\n\\nAnyway, I will consider these elements and slightly revise my review accordingly.\"}",
"{\"title\": \"Author Response to W5-6\", \"comment\": \"> **W5**: Insufficient performance results about the latency introduced by the proposed solution. It is true that for large-scale deployment environments the response time of the proposed solution is better than other baselines, but the Authors do not focus on which is exactly the latency in low-medium scale scenarios: for several application domains, a latency of around 2s could be considered excessive and it is not clear, given the scale of the y axis in Figure 8, which are exactly the number of ms that are more common for classical deployment environments. Which limitations stem from that? For which application domains is the proposed solution not feasible?\\n\\nThank you for your feedback. VNE is a well-known NP-hard combinatorial optimization problem. As network size and complexity increase, solving time inherently grows due to the problem's computational nature. In Figure 8, we focus on evaluating the scalability of CONAL in large-scale scenarios, as this represents the most challenging and impactful use cases for real-world deployment. The results demonstrate that CONAL performs well in these scenarios, with better scalability and response times compared to baselines, highlighting its effectiveness in handling large-scale network environments.\\nFor smaller topologies, the solving time is significantly reduced. As detailed in Table 1, CONAL achieves solving times under 0.5 seconds for WX100 (with 100 nodes and 500 links). Furthermore, for real-world topologies like GEANT and BREAN, which have smaller scales, the solving times are even lower\\u2014approximately 0.09 seconds for GEANT (with 40 nodes and 64 links) and 0.2 seconds for BREAN (with 161 nodes and 166 links). These results underline that CONAL's time consumption is well within acceptable ranges for classical deployment environments in smaller-scale scenarios. 
In the revised manuscript, we have included the solving times for smaller topologies to highlight CONAL\\u2019s practical feasibility for low-to-medium scale scenarios in Appendix G.3.\\n\\n> **W6**: Even if the paper is generally well organized and well written, a few writing inaccuracies are still present in the manuscript and call for some minor revision work in order to improve the paper presentation style. Only to mention one example: \\\"Addtional\\\" in page 24.\\n\\nWe appreciate your attention to detail. We have corrected the specific example you pointed out and thoroughly proofread the manuscript to correct all typographical errors and improve the overall presentation. \\n\\nAgain, we thank the reviewer for your in-depth suggestions for improving our submission. We hope the responses address your concerns. Thank you for your time and consideration.\"}",
"{\"summary\": \"The paper tackles the VNE problem within NFV networks. Recognizing the limitations of existing solutions in handling intricate constraints and unsolvable instances, the authors propose a framework called Constraint-Aware Learning, which formulates the VNE problem as a violation-tolerant constrained Markov Decision Process and introduces a reachability-guided optimization with adaptive reachability budgets. Additionally, the framework incorporates a constraint-aware graph representation method to capture cross-graph interactions and bandwidth-constrained path connectivity.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and tackles a significant problem, offering practical implications for real-world network systems and potential applicability to other optimization challenges. The performance is benchmarked against several state-of-the-art baselines. Experiments are conducted across a wide range of network scenarios.\", \"weaknesses\": \"The framework seems to assume a static PN setting. This assumption may not hold in highly dynamic network environments, such as mobile edge computing.\\n\\nThe focus is mainly on computing and bandwidth constraints. Other important factors, such as latency, reliability, and energy efficiency, are not addressed.\", \"questions\": \"1. How does the proposed method perform in dynamic network environments where the physical network topology and resource availabilities could change over time?\\n\\n2. Can you provide a more detailed analysis of the computational complexity of the proposed method, especially in comparison to baseline methods? \\n\\n3. Could you elaborate on the rationale behind using contrastive learning in the constraint-aware graph representation module?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
EAUGN4pszX | FreRA: A Frequency-Refined Augmentation for Contrastive Learning on Time Series Classification | [
"Tian Tian",
"Chunyan Miao",
"Hangwei Qian"
] | Contrastive learning has emerged as a competent approach for unsupervised representation learning. However, the design of an optimal augmentation strategy, although crucial for contrastive learning, is less explored for time series classification tasks. Existing predefined time-domain augmentation methods are primarily adopted from vision and are not specific to time series data. Consequently, this cross-modality incompatibility may distort the global semantics of time series by introducing mismatched patterns into the data. To address this limitation, we present a novel perspective from the frequency domain and identify three advantages for downstream classification: 1) the frequency component naturally encodes global features, 2) the orthogonal nature of the Fourier basis allows easier isolation and independent modifications of critical and unimportant information, and 3) a compact set of frequency components can preserve semantic integrity. To fully utilize the three properties, we propose the lightweight yet effective Frequency-Refined Augmentation (FreRA) tailored for time series contrastive learning on classification tasks, which can be seamlessly integrated with contrastive learning frameworks in a plug-and-play manner. Specifically, FreRA automatically separates critical and unimportant frequency components. Accordingly, we propose Identity Modification and Self-adaptive Modification to protect global semantics in the critical frequency components and infuse variance to the unimportant ones respectively.
Theoretically, we prove that FreRA generates semantic-preserving views. Empirically, we conduct extensive experiments on two benchmark datasets including UCR and UEA archives, as well as 5 large-scale datasets on diverse applications. FreRA consistently outperforms 10 leading baselines on time series classification, anomaly detection, and transfer learning tasks, demonstrating superior capabilities in contrastive representation learning and generalization in transfer learning scenarios across diverse datasets. | [
"time series classification",
"contrastive learning",
"frequency domain"
] | Reject | https://openreview.net/pdf?id=EAUGN4pszX | https://openreview.net/forum?id=EAUGN4pszX | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xGmrJqlC5Q",
"wdXQzeJqFS",
"o94WrlrnyQ",
"o8n9gSDjT1",
"nRtiD8rjsx",
"lTxmbl1FsK",
"ilKCNwP20H",
"exaSBvTSKn",
"cVamfUvege",
"Z2DaOTNbIN",
"YY6zOoq81l",
"X7WU50YHJC",
"U1SBHk8CZP",
"TVUmEndlU5",
"SpJdkI36XK",
"RHymvK6fde",
"PFBC2uD6Dd",
"OjbWsFGk05",
"CUHGoMBmqx",
"CMc3umrfdJ",
"9Ueg8mwFa9",
"9OnJe3ep5X",
"91uT2Yhb1C",
"8xZtoM5MMv",
"8Yg5swHY8M",
"5nSZqUZv2R",
"4WfHfYl3VA"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1733060908744,
1732557688979,
1732282925515,
1732283136965,
1732544152340,
1732544208117,
1732282665413,
1737523776665,
1732614575692,
1732283275823,
1731076716099,
1732975684275,
1732283226409,
1733060823763,
1732283395255,
1732624384556,
1734831446890,
1732544040910,
1732282850392,
1732282780901,
1730653045713,
1730103167229,
1732283335277,
1732283177367,
1730882819147,
1732543954901,
1732282724211
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Reviewer_GiHe"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6561/Reviewer_7F2V"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Reviewer_cXW7"
],
[
"ICLR.cc/2025/Conference/Submission6561/Reviewer_Qixi"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Area_Chair_ggEJ"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Reviewer_GiHe"
],
[
"ICLR.cc/2025/Conference/Submission6561/Reviewer_7F2V"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Reviewer_Qixi"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6561/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Further Responses to Reviewer Qixi [2/2]\", \"comment\": \"> **Q4**: Furthermore, using the contrastive objective to learn data augmentation is not reasonable enough for me, as this objective does not clearly encourage meaningful changes in data from the augmentation.\\n\\nA4. The reviewer may misunderstand how the contrastive objective facilitates our augmentation. Below, we provide clarifications to address the reviewer's misunderstanding. \\n\\nThe contrastive objective guides $\\\\mathbf{s}$ in FreRA to preserve semantics in critical components while introducing variance to unimportant components during augmentation, as we defined in Eq. (2), thereby enabling meaningful changes in data. \\n\\nThe function of the contrastive objective in our augmentation is mainly to enable $\\\\mathbf{s}$ to determine the importance of frequency components. Previous work [1] has shown that the contrastive objective is capable of preserving critical information while eliminating random noise. This supports our use of the contrastive objective in learning the importance of frequency components to global semantics with $\\\\mathbf{s}$. Specifically, the contrastive objective encourages higher $s_i$ for critical components so that critical information can be preserved in the augmented view. This has been empirically verified in our visualization of $\\\\mathbf{s}$ from Figure 6. Therefore, the learned $\\\\mathbf{s}$ can conduct meaningful changes from two perspectives:\\n 1. identity modification on critical components with higher $s_i$ to keep semantic information intact;\\n 2. self-adaptive modification on unimportant components with lower $s_i$ to introduce variance.\\n\\nTo summarize, the contrastive objective trains the vector $\\\\mathbf{s}$ to inform and drive the augmentation process.\\n\\n[1] Ji, Wenlong, et al. \\\"The power of contrast for feature learning: A theoretical analysis.\\\" Journal of Machine Learning Research 24.330 (2023): 1-78.\\n\\n> Q5. 
It is also unclear why S can be used to adaptively modify unimportant components, considering that it is not directly trained to do this. \\n\\nA5. The vector $\\\\mathbf{s}$ is trained to decide importance scores for frequency components. (The reviewer can refer to A6 of our earlier response 2/5 and A4 in this response regarding how $\\\\mathbf{s}$ is trained.) This enables $\\\\mathbf{s}$ to effectively distinguish between critical and non-critical components with an adaptive threshold, as explained in lines 342-350 in our revised paper. Additionally, for unimportant components, $\\\\mathbf{s}$ guides the self-adaptive modification module to apply stronger modifications to more irrelevant components, as explained in lines 351-355 in our revised paper.\\n\\n> Q6. Besides, this method may need more significant improvements to show its effectiveness.\\n\\nA6. Our method is a simple yet effective approach and can be applied in a plug-and-play manner. It has demonstrated strong empirical performance on a wide range of datasets in 3 settings, i.e., time series classification, anomaly detection, and transfer learning. Classic contrastive learning frameworks, such as BYOL[2] and SimCLR[3], are also effective approaches with simple designs. Therefore, we believe our current approach is effective enough to address existing challenges and it is less necessary to introduce extra design. \\n\\n[2] Grill, Jean-Bastien, et al. \\\"Bootstrap your own latent-a new approach to self-supervised learning.\\\" Advances in neural information processing systems 33 (2020): 21271-21284.\\n\\n[3] Chen, Ting, et al. \\\"A simple framework for contrastive learning of visual representations.\\\" International conference on machine learning. PMLR, 2020.\\n\\n> Q7. It is not very convincing to claim that FreRA does not outperform SoftCLT clearly because SoftCLT uses TS2Vec.\\n\\nA7. 
**The effectiveness of contrastive learning frameworks, such as SimCLR and TS2Vec, DOES make a significant difference across different datasets**, as discussed in prior works [4]. In the second table of our earlier response 4/5, we demonstrate that for the UEA and UCR datasets, SimCLR is less effective than TS2Vec. Consequently, it is challenging for a SimCLR-based approach to be comparable with a TS2Vec-based approach, i.e., SoftCLT. However, **FreRA compensates for the performance gap caused by the contrastive learning framework** and achieves comparable performance with SoftCLT. Moreover, **combining FreRA with SoftCLT further enhances the performance by up to 1.2%**, as shown in the last table of response 4/5. \\n\\n[4] Qian, Hangwei, Tian Tian, and Chunyan Miao. \\\"What makes good contrastive learning on small-scale wearable-based tasks?.\\\" Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2022.\\n\\n\\nLastly, we sincerely hope our above explanations can alleviate the reviewer's concerns about our work. We look forward to any further feedback from the reviewer.\"}",
"{\"comment\": \"Thanks for your clarifications and for providing more details; I will keep my score.\"}",
"{\"title\": \"Responses to Reviewer Qixi [2/5]\", \"comment\": \"> Q5. The overall novelty of the proposed augmentation method is limited. Augmentation from the frequency components is not a new idea for time series. The main difference is a trainable vector s to control the augmentation of different components.\\n\\nA5. In this work, we address a less-explored but practical and challenging problem, automatic augmentation for time series contrastive learning. We first figure out that **previous augmentation methods fail to preserve semantic integrity** in the augmented view, as illustrated in Figure 1 of our submission, and therefore limit the performance of downstream classification tasks. Based on this observation, we provide a novel perspective from the frequency domain to solve the above problem. The automatic nature of our approach makes it better than previous frequency-based augmentations in three aspects: \\n* FreRA eliminates the need for extensive parameter tuning and hand-picking, ensuring a more **efficient** augmentation process.\\n* FreRA is deliberately designed to **fully leverage the global, independent, and compact properties of the frequency domain**. It is a unified augmentation that conducts transformations aligned with the inherent semantic distribution of time series instead of stochastic perturbation. The generated views thus **preserve critical semantic information** and are more **effective** for representation learning.\\n* FreRA is designed in a plug-and-play manner, enabling seamless integration with different contrastive learning frameworks. It is consistently beneficial for a variety of contrastive learning frameworks. The empirical comparisons are listed in A4 to reviewer cXW7. \\n\\n> Q6. It is unclear why s trained from Equation (7) can learn to select critical components automatically.\\n\\nA6. We thank the reviewer for raising this important question. 
The short answer is: $\\\\mathbf{s}$ assigns higher values to critical components. This is the joint result of the maximum agreement objective between the two views and the compactness constraint regularizing the L1-norm of $\\\\mathbf{w}_\\\\text{crit}$.\\n\\nThe learning objective for FreRA consists of two components: 1) the contrastive loss and 2) the L1-norm regularization. Optimizing the augmentation with the contrastive loss makes it generate views similar to their corresponding anchors while remaining distinct from other instances, encouraging maximum agreement between the two views. The L1-norm regularization prevents the trivial solution where no changes or minor changes happen to the data, as we explained in lines 371-382 in the revised paper. The overall objective allows **minimal but necessary** critical frequency components to be included in the generated views. As a result, the values of $w_\\\\text{crit}^i$ for critical components are driven toward high values near 1. This differentiates the values of $s_i$, from which $w_\\\\text{crit}^i$ is derived, between critical and unimportant components. The value distinctions in $\\\\mathbf{s}$ allow it to adaptively identify the most semantically relevant components.\\n\\nMoreover, Figure 6 in our revised manuscript empirically verifies that $\\\\mathbf{s}$ learns to assign higher values to the most semantically relevant components and therefore facilitates critical component selection.\"}",
"{\"title\": \"Responses to Reviewer Qixi [3/5]\", \"comment\": \"> Q7. How FreRA achieves both semantic-preserving information and a considerable amount of variance and which designs correspond to these two sides respectively.\\n\\nA7. Semantic-preserving information and a considerable amount of variance correspond to $\\\\mathbf{w}\\\\_{\\\\text{crit}} \\\\odot x_f$ and $\\\\mathbf{w}\\\\_\\\\text{dist} \\\\odot x_f$ in the Eq. (2) of our manuscript, respectively. They are achieved through the corresponding modification vectors $\\\\mathbf{w}\\\\_{\\\\text{crit}}$ and $\\\\mathbf{w}_\\\\text{dist}$, which are derived from the identity modification module and self-adaptive modification module, respectively. The details are presented below.\\n\\n1. Semantic-preserving information is achieved by the **identity modification module on critical frequency components**. Specifically, we learn a lightweight trainable parameter vector $\\\\mathbf{s}$ to capture the inherent semantic distribution in the frequency domain. The semantic-preserving information is preserved by the critical frequency components identified by $\\\\mathbf{s}$.\\n \\n2. A considerable amount of variance is achieved by the **self-adaptive modification module on unimportant frequency components**. In the vector $\\\\mathbf{s}$, the value of each element $s_i$ indicates the importance of the $i$-th frequency component $x_f^i$ for the global semantics, as defined in lines 321-322 in the revised paper. Therefore, frequency components with smaller values of $s_i$ are considered unimportant. To infuse variance, we apply perturbation to those unimportant components. Disturbing them does not affect the semantics of the time series because they are independent of the critical frequency components. The selection of unimportant components and the strength of perturbation on each one of them are designed to be adaptive to the input time series dataset. This ensures the amount of variance infused is appropriate. 
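\\n\\nTo make the two modules above concrete, here is a minimal NumPy sketch of the view generation just described (an illustrative assumption, not the paper's implementation; in particular, the exact way the scaling factor enters the distortion of unimportant components is our simplification):

```python
import numpy as np

def frera_augment(x, s):
    # x: real-valued time series of length L; s: learned importance scores, F = L // 2 + 1
    x_f = np.fft.rfft(x)  # frequency components (global, orthogonal Fourier basis)

    # Adaptive selection: indices with s_i below min(0, mean(s)) form the unimportant set D
    threshold = min(0.0, s.mean())
    unimportant = s < threshold

    # Identity modification: critical components pass through unchanged
    w = np.ones_like(s, dtype=float)

    # Self-adaptive modification: perturb unimportant components, using
    # delta_s = mean(|s_i|) over D so that more irrelevant components change more
    if unimportant.any():
        delta_s = np.abs(s[unimportant]).mean()
        w[unimportant] = 1.0 + delta_s * np.abs(s[unimportant])

    return np.fft.irfft(w * x_f, n=len(x))  # augmented view, back in the time domain
```

Note that when no component falls below the threshold, the sketch reduces to the identity map; this is exactly the trivial solution that the L1-norm regularization in Equation (7) is designed to avoid.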
\\n\\n\\n> Q8. The self-adaptive modification seems simple and tricky. It only uses the vector s and threshold to select and scale unimportant frequency components. The motivation for scaling these components is unclear.\\n\\nA8. The motivation for scaling these components is to infuse variance into the augmented view. Previous studies [3-4] have shown that sufficient variance or task-irrelevant noise can improve the performance of contrastive learning models. \\n \\n However, they achieve this by adding random noise in the entire time domain, which introduces variance while inevitably interfering with the critical information. In contrast, FreRA avoids this issue by **selectively** adding variance to the well-separated unimportant frequency components. This ensures the **critical semantics are well preserved**. Moreover, the global property of the frequency components ensures that **all timestamps are altered** with distortion applied only to unimportant components.\\n \\n The self-adaptive modification is simple to implement in practice. However, it is **deliberately designed** to **adaptively** add variance to the augmented view. The adaptive nature of the self-adaptive modification is reflected in two aspects: \\n1. Adaptive unimportant component selection: Instead of handpicking a threshold value to separate the unimportant components from the rest, we determine the value with statistical information of the vector $\\\\mathbf{s}$. Specifically, we use the mean value of $\\\\mathbf{s}$ as the threshold and $D = \\\\{i | s_i < \\\\min(0, \\\\frac{1}{F} \\\\sum_{i=1}^{F}{s_i})\\\\}$ to denote the set of unimportant components' indices, as explained in lines 344-350 in the revised paper.\\n2. 
Adaptive strength of modification of each unimportant component: To adjust the degree of distortion according to the irrelevance of each frequency component, a scaling factor $\\\\delta_s = \\\\frac{1}{\\\\lvert D \\\\rvert} \\\\sum_{i=1}^{F} \\\\mathbb{1}_{\\\\{i \\\\in D\\\\}} \\\\lvert s_i \\\\rvert$ is applied to the unimportant components. This ensures that the least important frequency components receive the strongest amplification in the distortion step, as explained in lines 351-355 in the revised paper. Its effectiveness has been empirically verified, as detailed in our response to Q10.\\n\\nDespite the simple but meticulous design, the effectiveness of the self-adaptive modification module has been verified by the result of the ablation study presented in Table 3. It improves the average accuracy on the UEA archive from 0.695 to 0.754, an absolute improvement of 0.059. \\n \\n[3] Luo, Dongsheng, et al. \\\"Time series contrastive learning with information-aware augmentations.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 4. 2023.\\n\\n[4] Zheng, Xu, et al. \\\"Parametric Augmentation for Time Series Contrastive Learning.\\\" arXiv preprint arXiv:2402.10434 (2024).\"}",
"{\"title\": \"Kind reminder for discussion\", \"comment\": \"Dear Reviewer GiHe,\\n\\nWe would like to thank you for your valuable suggestions on enhancing our work. We have provided a response to address your concerns and revised our paper accordingly. We are happy to have further discussions if you have any outstanding concerns or questions.\\n\\nBest,\\n\\nPaper 6561 Authors\"}",
"{\"title\": \"Kind reminder for discussion\", \"comment\": \"Dear Reviewer 7F2V,\\n\\nWe would like to express our gratitude for your constructive comments and questions. We look forward to further discussions if you have any remaining questions. \\n\\nBest,\\n\\nPaper 6561 Authors\"}",
"{\"title\": \"Responses to Reviewer cXW7 [1/3]\", \"comment\": \"We thank the reviewer for the constructive comments and valuable feedback. We address your concerns below.\\n\\n> Q1. The importance distinction of this method is mostly for the entire time series, and it could be better to compare it with other methods and analyze the theoretical computational complexity.\\n\\nA1. We have included comparisons with key baseline methods in the current version, including (1) 11 commonly used handcrafted time-domain augmentations, (2) 5 handcrafted frequency-domain augmentations, (3) 3 SOTA automatic augmentations for contrastive learning, and (4) 5 SOTA time series contrastive learning frameworks. These baselines also operate on the entire time series and are strong benchmarks for time series classification tasks. **The uniqueness of our FreRA lies in its superior ability to preserve global semantics from the entire time series**, as illustrated in Figure 1 and the results.\\n\\nThe overall computational complexity involves three components, as explained below:\\n\\n1. The **transformation** involves two parts:\\n - The frequency-domain augmentation involves a Fourier Transform and an inverse Fourier Transform. Implemented by the Fast Fourier Transform (FFT), the computational complexity is $\\\\mathcal{O}(L \\\\log L)$, where $L$ is the sequence length.\\n - Based on the learned $\\\\mathbf{s}$, the identity modification module and the self-adaptive modification module conduct element-wise operations and introduce a computational complexity of $\\\\mathcal{O}(F)$.\\n2. **Update of trainable parameters**. The augmentation function is **parameterized** by a lightweight vector $\\\\mathbf{s} \\\\in \\\\mathbb{R}^{F}$, where $F=\\\\lfloor{L/2}\\\\rfloor+1$ is the number of frequency components. The computational complexity of updating $\\\\mathbf{s}$ is $\\\\mathcal{O}(F)$.\\n3. The **auxiliary loss** introduced by the augmentation. 
$\\\\mathbf{w}_\\\\text{crit}$ is regularized by the L1-norm. The computational complexity introduced by the loss term is $\\\\mathcal{O}(F)$. \\n\\nThe overall computational complexity is dominated by the Fourier Transform and its inverse. Hence, the overall complexity for FreRA can be approximated as $\\\\mathcal{O}(L \\\\log L)$. \\n\\nIn addition, we analyze the computational complexities of the other three SOTA automatic augmentations are present them in the table below. The analysis accounts for per-instance complexity. $B$ and $d$ denote the batch size and feature dimension respectively. \\n\\n| |FreRA| InfoMin$^+$ |InfoTS|AutoTCL|\\n|-|-|-|-|-|\\n|transformation|$\\\\mathcal{O}(L\\\\log L)+\\\\mathcal{O}(F)$|$\\\\mathcal{O}(L\\\\log L)+\\\\mathcal{O}(F)$| $\\\\mathcal{O}(7L)$ (7 time-domain augmentations) | $\\\\mathcal{O}(dL) + \\\\mathcal{O}(L)$ (timestamp-level factorization) |\\n| trianable parameter update | $\\\\mathcal{O}(F)$ | $\\\\mathcal{O}(F)$ | $\\\\mathcal{O}(7)$ (weight of 7 candidate augmentations) | $\\\\mathcal{O}(dL)$ (parameters in the factorization and transform functions) |\\n| auxiliary loss function | $\\\\mathcal{O}(F)$ (L1-norm) | $\\\\mathcal{O}(Bd)$ (InfoNCE) | $\\\\mathcal{O}(Bd)$ (InfoNCE) | $\\\\mathcal{O}(Bd)$ (MMD) |\\n| Overall | $\\\\mathcal{O}(L \\\\log L)$ | $\\\\mathcal{O}(L \\\\log L + Bd)$ | $\\\\mathcal{O}(L + Bd)$ | $\\\\mathcal{O}(L + dL + Bd)$ |\\n\\nThe overall computational complexity of InfoMin$^+$ clearly dominates FreRA. The efficiency of the other three methods depends heavily on the setting of hyper-parameters $B$ and $d$. FreRA consistently achieves competitive performance without imposing significant computational burden, making it a superior option for practical applications.\"}",
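The transformation pipeline whose complexity is analyzed above (forward FFT, element-wise modification guided by the learned importance vector, inverse FFT) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the threshold `tau` and the rule of damping unimportant components by their own scores are assumptions made for the sketch.

```python
import numpy as np

def frera_augment(x, s, tau=0.5):
    """FreRA-style frequency-domain augmentation of a 1-D series x.

    s is the learned importance vector over the F = L//2 + 1 rFFT
    components; tau and the damping rule are illustrative choices.
    """
    spec = np.fft.rfft(x)                 # forward FFT: O(L log L)
    critical = s >= tau                   # split components: O(F)
    # identity modification keeps critical components intact;
    # unimportant ones are damped by their own scores (assumed rule)
    weights = np.where(critical, 1.0, s)  # element-wise: O(F)
    return np.fft.irfft(spec * weights, n=len(x))  # inverse FFT: O(L log L)

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
s = rng.uniform(size=128 // 2 + 1)
v = frera_augment(x, s)
```

Every per-component step is O(F), so the two FFTs dominate, matching the O(L log L) total above.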
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Thanks to the authors for the efforts to address my comments. My questions have been clarified, and I maintain my view on this paper.\"}",
"{\"title\": \"Responses to Reviewer GiHe\", \"comment\": \"We deeply appreciate the valuable feedback from the reviewer. We would like to address your concerns below.\\n\\n> Q1. At the beginning of the paper, the authors make strong assumptions that existing predefined augmentation methods are primarily adopted from vision and are not specific to time series data. There are already several methods, especially frequency-based augmentation, e.g., TF-C, method design for time series contrastive learning.\\n\\nA1. We thank the reviewer for pointing out this important point. We are aware that existing pre-defined augmentations include time-domain and frequency-domain augmentations. Time-domain augmentations are mostly adopted from the vision domain. They often introduce mismatched patterns into the data because they do not account for the intrinsic characteristic of time series. On the other hand, frequency-based augmentations, such as high-pass and low-pass filters, require prior knowledge of the dataset to determine the selection of appropriate augmentation functions. 
Moreover, other stochastic frequency-domain augmentations, such as the frequency-based augmentation in TF-C, introduce random noise that can interfere with the critical information.\\n\\nThe challenges of existing predefined augmentations can be summarized as follows:\\n\\n- They often fail to consider the intrinsic characteristics of time series data, resulting in mismatched patterns due to stochastic perturbations.\\n - Certain augmentations require prior knowledge of the dataset, which is not always accessible in the contrastive learning paradigm.\\n - The wide range of possible augmentation functions requires extensive trials and errors to select the optimal one, making the augmentation process costly and less practical.\\n\\nThe sentence referenced by the reviewer was intended to highlight the problems with existing predefined time-domain augmentations and introduce the novel perspective from the frequency domain. We have revised the sentence to make it more precise and accurate. Moreover, we have included a detailed discussion of the frequency-based augmentations and their problems in Appendix A.2. \\n \\n> Q2. Since the paper mainly provides the frequency-based augmentation, the motivation study, such as Figure.1 probably should highlight more about whether current frequency-based method can capture the semantics, rather than only focus on the time-domain,\\n\\nA2. We thank the reviewer for the valuable suggestion. We add the pre-defined frequency-domain augmentation 'amplitude-and-phase-perturbation' denoted as $\\\\mathcal{T}_f(\\\\textsf{x})$ in Figure 1 with explanations updated in our manuscript. It still fails to preserve semantic integrity in the augmented view as we observe from its low mutual information (MI) values. This further explains the limitation of existing augmentations in capturing semantics and highlights the importance of our FreRA.\\n\\nWe deeply thank the reviewer for raising insightful comments. 
We sincerely hope our clarifications above have addressed your concerns and can improve your opinion of our work.\"}",
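The amplitude-and-phase perturbation discussed in the reply above can be sketched as follows. This is an illustrative sketch only: the perturbation strength `sigma` is a hypothetical choice, and the exact formulation used in TF-C or in the paper's Figure 1 may differ.

```python
import numpy as np

def amp_phase_perturb(x, sigma=0.1, rng=None):
    """Pre-defined stochastic frequency-domain augmentation of the kind
    discussed above: jitter the amplitude and phase of *every* rFFT
    component of a 1-D series x (sigma is an illustrative strength)."""
    if rng is None:
        rng = np.random.default_rng()
    spec = np.fft.rfft(x)
    amp = np.abs(spec) * (1.0 + sigma * rng.standard_normal(spec.shape))
    phase = np.angle(spec) + sigma * rng.standard_normal(spec.shape)
    # critical components receive the same noise as the unimportant ones,
    # which is why such augmentations can undermine global semantics
    return np.fft.irfft(amp * np.exp(1j * phase), n=len(x))
```

Because the noise is applied indiscriminately across all components, the frequency components carrying label-relevant information are perturbed along with the rest, consistent with the low MI values reported for this augmentation in Figure 1.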
"{\"summary\": \"The paper proposes a method, FreRA, to enhance time series classification by contrastive learning and sample augmentations. First, they considered the frequency domain of the time series. FreRA automatically separates the critical and unimportant frequency components. They proposed Identity Modification and Self-adaptive Modification to protect the global semantics in the critical frequency components and inject variance into the unimportant components, respectively. Extensive experimental results on several datasets show that FreRA outperforms existing methods in terms of accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Overall, this paper is well-written and easy to follow.\\n2. The problem studied is significant, and exploring augmentation in time series is novel.\\n3. Extensive experimental results are promising.\", \"weaknesses\": \"1. The importance distinction of this method is mostly for the entire time series, and it could be better to compare it with other methods and analyze the theoretical computational complexity.\\n2. Although frequency methods can improve efficiency, it is unclear whether such methods mainly focus on the low-frequency part and ignore the high-frequency part which is more important for time series prediction.\\n3. Do the authors consider the dependencies between channels, which is very significant for multivariate time series.\\n4. The authors claim that FreRA can be benefited by any contrastive learning framework, but only show the results of InfoNCE. What about other CL paradigms, such as SimCLR, etc.? It could be better to present more sufficient ablation.\\n5. The experimental results are selected from the highest performances among 11 time-domain augmentations and 5 frequency-domain augmentations. Is this fair enough? There seems to be randomness with such selection strategy.\\n6. 
The results of the impacts of hyper-parameters could be moved to the main paper for a better organization.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for the rebuttal. Some of my concerns have been addressed.\\n\\nI think the technical novelty of the method, which is mainly based on the trainable vector S is still a little limited. Although the vector S can be automatically trained, it still needs to control the trade-off between the contrastive objective and the regularization term to balance huge semantic changes and trivial solutions of no changes. Considering this, the idea of this method seems similar to simply controlling the strengths when using augmentations. Currently, the critical components are a little abstract to me. It may need more analysis to show their properties and why they are critical. Furthermore, using the contrastive objective to learn data augmentation is not reasonable enough for me, as this objective does not clearly encourage meaningful changes in data from the augmentation. It is also unclear why S can be used to adaptively modify unimportant components, considering that it is not directly trained to do this. Besides, this method may need more significant improvements to show its effectiveness. It is not very convincing to claim that FreRA does not outperform SoftCLT clearly because SoftCLT uses TS2Vec.\"}",
"{\"title\": \"Responses to Reviewer Qixi [5/5]\", \"comment\": \"> Q10. The ablation study is coarse, and some important variants are missing. For example, modifying all (or randomly selected) frequency components and modifying unimportant components randomly.\\n\\nA10. We thank the valuable advice from the reviewer. We have included a more comprehensive ablation study considering three more variants of infusing variance into the frequency domain:\\n\\n- Modifying **all** frequency components with stochastic perturbation.\\n- Modifying **randomly selected** frequency components with stochastic perturbation.\\n- Modifying **unimportant** frequency components with stochastic perturbation.\\n\\n| | FreRA | perturbation (all) | perturbation (random) | perturbation (unimportant) |\\n|----------|:---------:|:------------------:|:---------------------:|:--------------------------:|\\n| Avg. ACC | **0.754** | 0.642 (-0.112) | 0.651 (-0.103) | 0.703 (-0.051) |\\n\\nThe results averaged over 30 datasets from the UEA archive are presented in the table above. The performance drop from the third setting highlights the effectiveness of self-adaptive modification applied to unimportant components, as compared to stochastic perturbation. However, randomly disrupting the unimportant components is still better than directly removing them from the generated view (\\\"w/o modification on noise components\\\" in Table 3). The first two settings demonstrate the importance of isolating critical components when introducing noise. Modifying all or random frequency components inevitably interferes with the critical components, which damages the semantic information in the generated views and leads to degraded performance. Randomly perturbing all frequency components results in larger performance drops.\\n\\n> Q11. As this paper focuses on time series classification, why is it also evaluated in anomaly detection?\\n\\nA11. 
In this work, we treat anomaly detection as a 3-class classification problem. Notably, the **class distribution is highly imbalanced** (the class distribution follows 1240:6200:6200) in the anomaly detection dataset. The evaluation results in the anomaly detection tasks further verify the **effectiveness** of our method in a more difficult problem and demonstrate its **generalizability** across different tasks beyond standard time series classification. \\n\\n> Q12. Figure 2 is hard to read. The different colors for vector blocks in FreRA are confusing.\\n\\nA12. We thank the reviewer for sharing the feedback on Figure 2. We have made the following amendments in our revised manuscript to make the figure clearer and more readable. \\n1. Update the legend to clarify the use of colors for the vector blocks.\\n2. Add an explanation of the colored blocks in $\\\\mathbf{w}_\\\\text{dist}$ in the caption. \\n\\nThe use of different colors for the vector $\\\\mathbf{s}$ is intended to indicate that $\\\\mathbf{s}$ learns to assign different levels of importance to the frequency components. The colored blocks in $\\\\mathbf{w}\\\\_\\\\text{dist}$ illustrate the adaptive distortion strength on unimportant frequency components ($\\\\mathbf{w}\\\\_\\\\text{dist}$ has matching colors with $\\\\mathbf{s}$ on the positions of unimportant components). \\n\\n\\nWe deeply thank the reviewer for raising important questions. We sincerely hope our clarifications above have addressed your questions and concerns and can improve your opinion of our work.\"}",
"{\"title\": \"Further Responses to Reviewer Qixi [1/2]\", \"comment\": \"We thank the reviewer for the feedback. We would like to reply to your further concerns below.\\n\\n> Q1. I think the technical novelty of the method, which is mainly based on the trainable vector S is still a little limited\\n\\nA1. We would like to further clarify the technical novelty of our method. While the trainable vector $\\\\mathbf{s}$ is a core component, its value lies not only in its adaptive nature but also in **how it informs and drives the augmentation process**. Specifically, as $\\\\mathbf{s}$ indicates the importance of frequency components, without introducing extra designs, a single $\\\\mathbf{s}$ adeptly guides (1) separation between critical and non-critical components and (2) modifications preserving critical information while introducing variance. \\n\\nWe do not hold the belief that a simple yet effective method has limited novelty and contribution. **Despite the lightweight design, FreRA is an effective approach elegantly addressing a nontrivial problem, which can be applied in a plug-and-play manner to a wide range of contrastive learning frameworks.**\\n\\n> Q2. Although the vector S can be automatically trained, it still needs to control the trade-off between the contrastive objective and the regularization term to balance huge semantic changes and trivial solutions of no changes. Considering this, the idea of this method seems similar to simply controlling the strengths when using augmentations.\\n\\nA2. It is worth noting that existing automatic augmentations for time series contrastive learning, i.e., InfoTS and AutoTCL, both rely on trade-off hyper-parameters to balance the loss terms and **this is not unique to FreRA**. Specifically, InfoTS uses **two** trade-off hyperparameters and AutoTCL uses **three**. 
Compared to them, our FreRA relies only on a **single** trade-off hyper-parameter $\\\\lambda$, which **significantly reduces the cost and complexity of hyper-parameter tuning**.\\n\\nMoreover, as compared to predefined augmentations, FreRA offers a substantial advantage by **alleviating the trials and errors in selecting both the optimal transformation function and the optimal augmentation strength**. This **saves a significant amount of time and resources** in hyper-parameter tuning. \\n\\nBeyond the efficiency in hyper-parameter tuning, FreRA has demonstrated **robust performance towards the selection of hyper-parameter $\\\\lambda$**, as shown in our ablation study on the sensitivity of $\\\\lambda$. Specifically, the results indicate that the performance of FreRA remains stable across different values of $\\\\lambda$ and outperforms the second-best baselines.\\n\\n> Q3. Currently, the critical components are a little abstract to me. It may need more analysis to show their properties and why they are critical.\\n\\nA3. Intuitively, critical components refer to the frequency components that are **most relevant to the labels in the downstream task**. For example, in the Libras dataset, which records hand movements for sign language, the low-frequency components of the recorded signal are critical. This is because hand movements encoding sign language are smooth and include gradual changes over time, making the low-frequency components most relevant for capturing the global semantics of the gestures. In contrast, high-frequency components mostly represent noise, sensor artifacts, or random fluctuations, which contribute little to the global semantics of hand movements. Similarly, in the Epilepsy dataset recording wrist activities, part of critical information often resides in higher frequency components. This is because convulsions often happen to people with epilepsy when performing activities, generating high-frequency signals in sensor readings. 
As a result, the higher frequency components act as a part of critical features.\\n\\nThis analysis is supported by our quantitative measurement, i.e., the mutual information between the frequency components and the ground-truth label, shown in the bar plots of Figure 6. For the Libras dataset (on the left), the plot demonstrates that low-frequency components exhibit higher mutual information with the labels, and thus act as critical components for downstream tasks. Conversely, for the Epilepsy dataset (on the right), some higher frequency components demonstrate larger mutual information, indicating their importance for downstream tasks.\"}",
"{\"title\": \"Responses to Reviewer 7F2V [2/2]\", \"comment\": \"> Q4. What did the transfer learning experiment aim to prove?\\n\\nA4. The transfer learning experiment aimed to evaluate whether representation learning with FreRA could effectively transfer useful knowledge to data from unseen domains that have large domain gaps with the training data. The result demonstrates that FreRA achieves **stronger transferability** compared to other baselines. This is attributed to FreRA's ability to infuse variance into the augmented view while leaving critical semantics intact. The encoder thus learns representations that capture the inherent semantics and disregard environmental noise. FreRA therefore helps to reduce the domain gap through enhanced augmentation. It further verifies the effectiveness of FreRA.\\n\\n> Q5. If all three modules were removed, what would be the resulting performance?\\n\\nA5. If all three modules were removed, the model would downgrade to a vanilla contrastive learning framework where both views are the original input. We present the results averaged over 30 datasets from the UEA archive in the table below. The significant performance drop when all three modules are removed (the fourth column) indicates that removing the augmentation leads to collapsed representation learning. To remove all three modules and avoid collapsed training, a simple way is to apply basic augmentations such as pre-defined time-domain and frequency-domain augmentations. The results are presented in the last two columns.\\n\\n| |FreRA|w/o modification on critical components|w/o modification on noise components| w/o L1 regularization | w/o all three components|best(T)|best(F)|\\n|--|:--:|:--:|:--:|:--:|:--:|--|--|\\n| Avg. ACC | **0.754** |0.690 (-0.064)|0.695 (-0.059)|0.690 (-0.064)| 0.615 (-0.139)|0.684 (-0.070)|0.686 (-0.068)|\\n\\n> Q6. If FreRA were integrated into softCLT, would there be any gain in performance?\\n\\nA6. We thank the reviewer for the valuable question. 
We integrate FreRA into SoftCLT and observe a gain in performances on the three large HAR datasets, as shown in the table below. These results further validate FreRA\\u2019s flexibility and effectiveness when applied to other contrastive learning frameworks.\\n\\n|Dataset|SoftCLT + FreRA|SoftCLT + original augmentation|\\n|:--:|:--:|:--:|\\n|UCIHAR|**0.969**|0.961|\\n|MS|**0.974**|0.962|\\n|WISDM|**0.956**|0.952|\\n\\nWe deeply thank the reviewer for raising important questions. We sincerely hope our clarifications above have addressed your concerns and can improve your opinion of our work.\"}",
"{\"title\": \"General Response\", \"comment\": \"Dear ACs and Reviewers,\\n\\nWe sincerely appreciate your time and effort in reviewing our work and providing constructive feedback. We would like to 1) express our gratitude for reviewers\\u2019 recognition of our work, and 2) highlight the major modifications made in our revised paper.\\n\\n**We thank the reviewers for recognizing and appreciating the advantages of our work.**\\n\\n* Investigating automatic augmentation in time series is **significant, novel and well-motivated**. [cXW7,Qixi,7F2V,GiHe]\\n* The proposed methodology is **clear, interesting, novel, and easy to follow and implement**. [7F2V,GiHe,Qixi]\\n* The proposed FreRA is a **plug-and-play** method that can be integrated seamlessly with existing contrastive learning frameworks. [7F2V]\\n* **Extensive experimental results** are **promising or strong**. [cXW7, Qixi, GiHe]\\n* The paper is **well-written, well-organized and easy to follow**. [cXW7,7F2V]\\n\\nBesides the response to each reviewer, we would like to summarize the **major modifications made in our revised paper (highlighted in blue)**:\\n\\n1. **More evaluations on different contrastive learning frameworks.** We expand the evaluation of FreRA on 3 additional contrastive learning frameworks. The results reported in Table 5 and Table 6 of Appendix A.9 demonstrate that FreRA is a **plug-and-play** method and it **consistently and effectively enhances existing contrastive learning frameworks**. [cXW7,7F2V]\\n\\n2. **Visualization and analysis on the learned vector $\\\\mathbf{s}$.** We visualize the learned vector $\\\\mathbf{s}$ and demonstrate its ability to **capture the diverse distributions of critical semantics** across three different datasets. The visualization is provided in Figure 6, with explanations in Appendix A.9. [cXW7,Qixi]\\n\\n3. 
**More discussion and visualization on predefined frequency-domain augmentations.**\\n \\n * We revise the statements regarding our motivation for investigating the frequency domain in the abstract to make them precise and accurate. Additionally, we include a detailed discussion on frequency-based augmentations and their limitations in Appendix A.2. [GiHe]\\n * We include the predefined frequency-domain augmentation in Figure 1 to illustrate that it also undermines the semantics of the generated views. Corresponding explanations are updated accordingly. [GiHe]\\n\\n4. **Presentation enhancements.**\\n\\n * We update the legend and caption in Figure 2 to clarify the use of colors for the vector blocks. [Qixi]\\n * We move the ablation study on the impact of hyper-parameters from the appendix to the main paper for better organization. [cXW7]\\n\\n\\nThank you once again for taking the precious time to review our work. We would be delighted to engage in further discussions if you have any remaining questions or concerns.\\n\\nBest regards,\\n\\nPaper 6561 Authors\"}",
"{\"metareview\": \"This paper presents a new augmentation method named Frequency-Refined Augmentation (FreRA) for time series contrastive learning and classification. Reviewers agreed that the paper is well written and easy to follow, the method is well motivated, and the experiments are extensive. Meanwhile, reviewers pointed out that there are still some limitations regarding technical contribution, details on methodology, experiments, novelty, etc. Although some of these concerns have been addressed during the rebuttal and discussion stage, some issues still remain. For instance, the novelty of the proposed method is not significant, and the advantages of FreRA over existing work are not sufficiently justified. Overall, this is a borderline paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised many concerns regarding technical contribution, details on methodology, experiments, novelty, etc. The authors provided detailed responses and additional results, which have addressed some of these concerns. However, during the post-rebuttal discussions, the concerns on novelty and technical contributions still remain.\"}",
"{\"title\": \"Kind reminder for discussion\", \"comment\": \"Dear Reviewer Qixi,\\n\\nWe would like to express our gratitude for your valuable questions and suggestions. We have provided point-by-point replies to your concerns. We are wondering if our response has properly addressed your concerns. We look forward to discussions with you if you have any outstanding concerns or questions.\\n\\nBest,\\n\\nPaper 6561 Authors\"}",
"{\"title\": \"Responses to Reviewer Qixi [1/5]\", \"comment\": \"We thank the reviewer for the time to review our work and offering valuable comments. We address your questions below.\\n\\n> Q1. clarify important terms\\n\\nA1. We clarify the important terms as follows.\\n - \\\"Semantic integrity\\\" refers to the **preservation of the meaningful features within the time series that are essential for the downstream classification tasks**, as we explained in lines 65-69 in the revised paper. It is opposed to 'semantic degradation'. In math, it is defined as the state of augmented view $\\\\mathsf{v^\\\\ast}$ when $\\\\text{MI}(\\\\mathsf{v^\\\\ast},\\\\mathsf{y}) = \\\\text{MI}(\\\\mathsf{x},\\\\mathsf{y})$, where $\\\\mathsf{x}$ and $\\\\mathsf{y}$ are the random variables denoting time series sample and the label, and $\\\\text{MI}$ represents mutual information.\\n\\n - \\\"Critical frequency components\\\" refers to the frequency components that **contain substantial information related to the downstream classification task**, such as those containing key recurring patterns in the time series. Conversely, \\\"unimportant frequency components\\\" are those that **contribute minimally to classification tasks**, such as noise from the environment. \\n\\n> Q2. Why can we measure semantic integrity using mutual information? Is this consistent with humans\\u2019 understanding of semantics?\\n\\nA2. From the definition of mutual information, the value of $\\\\text{MI}(\\\\mathsf{v^\\\\ast},\\\\mathsf{y})$ quantifies the amount of information the augmented view $\\\\mathsf{v^\\\\ast}$ can provide about the label. Therefore, it measures the extent to which critical semantics relevant to $\\\\mathsf{y}$ are preserved, which is the definition of semantic integrity as we explained above. \\n\\nUnlike images and natural language, the semantics of time series are **not intuitively recognizable for human understanding**. 
In other words, given a time series signal, humans\\u2019 understanding of its semantics is ambiguous and vague. Alternatively, by quantitatively measuring the mutual dependency between the time series and its semantics, **the mutual information is superior to humans' understanding in reflecting the completeness of unintuitive semantics**. Moreover, **extensive works [1-3] have applied mutual information to measure the amount of task-relevant semantic information**, which supports its usage in our work. \\n\\n[1] Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. \\\"Representation learning with contrastive predictive coding.\\\" arXiv preprint arXiv:1807.03748 (2018).\\n\\n[2] Tian, Yonglong, et al. \\\"What makes for good views for contrastive learning?.\\\" Advances in neural information processing systems 33 (2020): 6827-6839.\\n\\n[3] Luo, Dongsheng, et al. \\\"Time series contrastive learning with information-aware augmentations.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 4. 2023.\\n\\n> Q3. How can we measure the importance of frequency components?\\n\\nA3. Similar to how we quantify the important information in the time domain, we use mutual information to measure the importance of frequency components. Since the frequency components are complex numbers, we calculate MI using only their real parts for compatibility. In Figure 6 of our revised manuscript, we visualize the MI of 3 datasets (Libras, ArticularyWordRecognition, and Epilepsy) by blue-grey bar plots. The important components (those with high MI values with labels) are distributed in low frequencies, middle frequencies, and across multiple frequencies in these datasets, respectively, showing diverse distributions. Despite the diversity, the learned vector $\\\\mathbf{s}$, plotted in orange lines, consistently captures the inherent critical information by learning to assign higher values to the most semantically relevant frequency components. 
This further verifies the effectiveness and the generalization of FreRA across diverse distributions. \\n\\n> Q4. What are these critical components critical for?\\n\\nA4. Intuitively, the critical components are critical for the downstream tasks as they contain task-related information. In practice, preserving the critical components in the augmented view allows the learned representations to have good performance in the downstream classification tasks.\"}",
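The per-frequency mutual information described in A3 (between the real part of each FFT component and the class label) can be sketched with a simple histogram plug-in estimator. The estimator, bin count, and the toy dataset below are illustrative assumptions; the authors' exact procedure for Figure 6 is not specified in this thread.

```python
import numpy as np

def mi_per_frequency(X, y, bins=8):
    """Histogram plug-in estimate of MI between the real part of each
    rFFT component and the class label (bin count is illustrative).

    X : (N, L) array of time series, y : (N,) integer labels.
    """
    comps = np.fft.rfft(X, axis=1).real          # (N, F) real parts
    n_cls = int(y.max()) + 1
    mis = np.empty(comps.shape[1])
    for f in range(comps.shape[1]):
        edges = np.histogram_bin_edges(comps[:, f], bins=bins)
        c = np.digitize(comps[:, f], edges)      # bin index per sample
        joint = np.zeros((bins + 2, n_cls))
        np.add.at(joint, (c, y), 1.0)            # joint histogram
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        mis[f] = np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))
    return mis

# toy check: the label only controls the amplitude of the 4th harmonic,
# so that frequency bin should carry the highest MI with the label
rng = np.random.default_rng(1)
t = np.arange(32)
y = rng.integers(0, 2, size=200)
amp = np.where(y == 0, 1.0, 3.0)
X = amp[:, None] * np.cos(2 * np.pi * 4 * t / 32)[None, :]
X = X + 0.01 * rng.standard_normal(X.shape)
mis = mi_per_frequency(X, y)
```

On this synthetic example, the MI profile peaks at the label-dependent frequency, mirroring the bar plots in Figure 6 where critical components stand out from noise components.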
"{\"title\": \"Responses to Reviewer cXW7 [3/3]\", \"comment\": \"> Q4. The authors claim that FreRA can be benefited by any contrastive learning framework, but only show the results of InfoNCE. What about other CL paradigms, such as SimCLR, etc.? It could be better to present more sufficient ablation.\\n\\nA4. We thank the reviewer for the insightful comment. In our submission, we apply the architecture of the widely used SimCLR with InfoNCE as the contrastive learning framework. The reason we use InfoNCE instead of NT-Xent, as originally applied in SimCLR, is the better empirical performance, as shown in the table (rows 7-12) below. The same usage has been deployed in [1-2] as well. Moreover, we provide an ablation study evaluating FreRA on alternative contrastive learning frameworks, including TS2Vec, TS-TCC and BYOL, in the Appendix. \\n\\nFollowing the suggestions from the reviewer, we have further expanded our evaluation to include additional contrastive learning frameworks: the SimCLR architecture with NT-Xent as contrastive loss functions, as well as an advanced contrastive learning framework SoftCLT. Our current evaluation covers **5 contrastive learning frameworks** and **3 types of contrastive loss functions**. It is worth noting that the contrastive losses used in TS-TCC, TS2Vec, and SoftCLT are different variants of InfoNCE, each with its unique formulation. The results presented below consistently demonstrate that FreRA is a **plug-and-play** method that **consistently and effectively enhances existing contrastive learning frameworks**. 
\\n\\n| | Augmentation + CL framework (contrastive loss) |UCIHAR| MS |WISDM|\\n| -- |:-- |:--:|:--:|:--:|\\n| 1 | FreRA + TS2Vec (InfoNCE) |**0.970**|**0.968**|**0.957**|\\n| 2 | original TS2Vec (InfoNCE) | 0.959 | 0.945 | 0.939 |\\n| 3 | FreRA + TS-TCC (InfoNCE) |**0.944**|**0.959**|**0.962**|\\n| 4 | original TS-TCC (InfoNCE) | 0.924 | 0.915 | 0.889 |\\n| 5 | FreRA + SoftCLT (InfoNCE) |**0.969**|**0.974**|**0.956**|\\n| 6 | original SoftCLT (InfoNCE) | 0.961 | 0.962 | 0.952 |\\n| 7 | FreRA + SimCLR (InfoNCE) |**0.975**|**0.982**|**0.972**|\\n| 8 | best(T) + SimCLR (InfoNCE) | 0.959 | 0.956 | 0.942 |\\n| 9 | best(F) + SimCLR (InfoNCE) | 0.960 | 0.970 | 0.950 |\\n| 10 | FreRA + SimCLR (NT-Xent) |**0.972**|**0.979**|**0.966**|\\n| 11 | best(T)+SimCLR (NT-Xent) | 0.951 | 0.969 | 0.941 |\\n| 12 | best(F) + SimCLR (NT-Xent) | 0.955 | 0.965 | 0.952 |\\n| 13 | FreRA + BYOL (Cosine Similarity) |**0.960**|**0.983**|**0.952**|\\n| 14 | best(T) + BYOL (Cosine Similarity)| 0.940 | 0.968 | 0.942 |\\n| 15 | best(F) + BYOL (Cosine Similarity)| 0.937 | 0.954 | 0.928 |\\n\\n[1] Yeh, Chun-Hsiao, et al. \\\"Decoupled contrastive learning.\\\" European conference on computer vision. Cham: Springer Nature Switzerland, 2022.\\n\\n[2] Wu, Junkang, et al. \\\"Understanding contrastive learning via distributionally robust optimization.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n> Q5. The experimental results are selected from the highest performances among 11 time-domain augmentations and 5 frequency-domain augmentations. Is this fair enough? There seems to be randomness with such selection strategy.\\n\\nA5. The highest performances we present are the **upper limits of pre-defined augmentations**. Such a selection strategy, listing only the highest results, also intends to ensure a **simple and concise** tabular presentation **without randomness**. 
\\n\\nThe selection of the optimal pre-defined augmentation is data-specific and there is no unified pre-defined augmentation that works well on all the datasets. Even if we report the best empirical results of the optimal augmentations selected from exhaustive trials and errors on each dataset, they are still worse than the results of our FreRA. In this context, the comparison is fair enough, as the selection strategy aims to find out the augmentations that empirically best suit the given dataset. This comparison again highlights the effectiveness of FreRA. \\n\\n> Q6. The results of the impacts of hyper-parameters could be moved to the main paper for a better organization.\\n\\nA6. We thank the reviewer for the suggestion. We have included the ablation study on the impacts of hyper-parameters in the main paper accordingly. Due to the space limit, the plot and the detailed analysis remain in the Appendix, but the main conclusions of this ablation study are accessible from the main paper. \\n\\nWe deeply thank the reviewer for raising insightful comments. We sincerely hope our clarifications above have addressed your concerns and can improve your opinion of our work.\"}",
"{\"summary\": \"The paper introduces a novel augmentation technique designed for time-series contrastive learning by leveraging frequency-domain properties. It utilizes the idea of the FFT, which separates time-series data into critical and non-critical frequency components.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The method utilizes the connection between frequency-domain knowledge and semantic information to enhance representation learning. The critical components capture global semantics essential for classification, while non-critical components are used for self-adaptive noise injection, which I find an interesting link, and the authors provide a comprehensive explanation of the motivation.\\nThe authors provide extensive experiments and strong experimental results to demonstrate their method's effectiveness.\", \"weaknesses\": \"1. At the beginning of the paper, the authors make the strong assumption that existing predefined augmentation methods are primarily adopted from vision and are not specific to time series data. There are already several methods designed for time series contrastive learning, especially frequency-based augmentations, e.g., TF-C.\\n2. Since the paper mainly provides a frequency-based augmentation, the motivation study, such as Figure 1, should probably highlight more whether current frequency-based methods can capture the semantics, rather than focusing only on the time domain.\", \"questions\": \"/\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This study introduces Frequency-Refined Augmentation (FreRA), a method designed to overcome limitations in current augmentation strategies for time series classification in contrastive learning. Unlike existing visual-based augmentations, FreRA leverages three key advantages of the frequency domain properties to better preserve the global semantics of time series data. FreRA automatically segregates these components, applying Identity Modification to preserve vital details and Self-adaptive Modification to add variance to less significant parts. Theoretical proofs and empirical evaluations confirm FreRA's superiority, showing it outperforms ten leading baselines across various time series tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well-written and well-organized.\\n2. The methodology is clear and the problem is well-motivated.\\n3. This is a plug-and-play method that appears to integrate seamlessly with existing contrastive learning frameworks.\", \"weaknesses\": \"1. Compared to existing works, this work does not exhibit a notable advantage in classification performance.\\n2. I am somewhat confused about the experimental design for the transfer learning part: 1) Why was SHAR data selected instead of one of the datasets listed in Table 1 (e.g., UCIHAR) to evaluate the transfer capability of the algorithm? 2) Based on the experimental results in Table 2, the performance is lower than that reported in the reference work (Qian et al., 2022). What might be the reasons for this difference?\\nWhat did this experiment aim to prove?\\n3. In the ABLATION STUDIES part, the authors sequentially removed each of the three innovative method components for comparison. From the experimental results, the gains provided by the three modules seem roughly equivalent. If all three modules were removed, what would be the resulting performance? 
Looking at Table 1 in the paper, the performances of SoftCLT and FreRA appear quite similar. If FreRA were integrated into SoftCLT, would there be any gain in performance?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Responses to Reviewer 7F2V [1/2]\", \"comment\": \"We deeply appreciate the valuable feedback and constructive comments from the reviewer. We would like to address your questions below.\\n\\n> Q1. Compared to existing works, this work does not exhibit a notable advantage in classification performance.\\n\\nA1. We would like to highlight the overall performance improvement of our FreRA compared to advanced SOTA methods on the 5 benchmarks in Table 1, as presented in the table below. \\n\\n| | FreRA (Ours) | best(T) | best(F) | InfoMin | InfoTS | AutoTCL | TS2Vec | TNC | TS-TCC | TF-C | SoftCLT |\\n|-----------------|:------------:|:-------:|:-------:|:-------:|:------:|:-------:|:------:|:-----:|:------:|:-----:|:-------:|\\n| Overall ACC (%) | **90.66** | 85.28 | 86.20 | 86.16 | 88.56 | 68.22 | 87.84 | 61.66 | 83.52 | 67.30 | 89.52 |\\n| Overall RANK | **1.00** | 6.60 | 5.20 | 4.20 | 4.00 | 8.60 | 5.60 | 9.60 | 7.80 | 9.80 | 3.00 |\\n\\n\\nAlthough the performance gain of FreRA over SoftCLT on some datasets is relatively modest, this is primarily attributed to the use of the soft InfoNCE loss within its TS2Vec framework. For a fair comparison, we present the performance gains of our FreRA and the soft InfoNCE loss on the same TS2Vec framework in the table below (rows 2-3). We observe that FreRA brings larger improvements to the contrastive learning framework. \\n\\nMoreover, FreRA and the soft InfoNCE loss represent independent improvements to the contrastive learning framework. Due to the **plug-and-play** design of FreRA, these enhancements can be seamlessly integrated. 
The results in rows 3-4 demonstrate that incorporating FreRA into SoftCLT further introduces additional gains to its performance.\\n\\n| | CL framework | UCIHAR |MS|WISDM| the factor driving the performance improvements |\\n|:-:|:--:|:--:|:--:|:--:|--|\\n|1|original TS2Vec|0.959|0.945|0.939||\\n|2|FreRA + TS2Vec|0.970 (+0.011)|0.968 (+0.023) |0.957 (+0.018)|FreRA|\\n|3|soft InfoNCE + TS2Vec (SoftCLT)|0.961 (+0.002)|0.962 (+0.017) |0.952 (+0.013)|soft InfoNCE|\\n|4|FreRA + soft InfoNCE + TS2Vec (FreRA + SoftCLT)|0.969 (+0.008)|0.974 (+0.012)|0.956 (+0.004)|FreRA|\\n\\nMoreover, in the evaluations of transfer learning and anomaly detection tasks, our FreRA achieves significant improvements. This is evidence that FreRA can capture the inherent semantics of the time series and generalize to unseen data distributions and different downstream tasks. \\n\\nLastly, we highlight the significance and contributions of our work as follows:\\n \\n - **A novel perspective of designing automatic augmentation from the frequency domain.** We provide a novel perspective to a practical and less-explored problem, automatic augmentation for time series contrastive learning, from the frequency domain. We identify three properties: global, independent, and compact, which advance the view generation and are beneficial for the time series classification task.\\n\\n - **A simple yet effective frequency-domain automatic augmentation.** We develop FreRA, a lightweight and unified automatic augmentation method for contrastive representation learning in time series classification tasks. FreRA can be applied in a plug-and-play manner and is jointly optimized with the contrastive learning model.\\n\\n> Q2. Why was SHAR data selected instead of one of the datasets listed in Table 1 (e.g., UCIHAR) to evaluate the transfer capability of the algorithm?\\n\\nA2. We selected the SHAR dataset because it presents a **larger domain gap**, as discussed in [1]. 
This larger domain gap makes the task more difficult and allows us to better assess the generalization and transferability of our method. It also makes our conclusion that FreRA leads to transferable representations more convincing. \\n \\n> Q3. What might be the reasons for the performance difference with the reference work (Qian et al., 2022)?\\n\\nA3. The performance difference could be due to the different choices of the encoder and different hyper-parameter search scopes. The experiments in [1] aim to compare the performances of various contrastive learning frameworks, while our focus is mainly on the performance difference among different augmentations. It is worth noting that, within our work, we apply the same encoder and the same hyper-parameter search space, which constitutes a **fair comparison** with the baselines. \\n \\n[1] Qian, Hangwei, Tian Tian, and Chunyan Miao. \\\"What makes good contrastive learning on small-scale wearable-based tasks?.\\\" Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2022.\"}",
"{\"title\": \"Responses to Reviewer Qixi [4/5]\", \"comment\": \"> Q9. Compared with some SOTA baselines, such as SoftCLT and InfoTS, the advantage of FreRA is not clear, especially on UEA and UCR datasets.\\n\\nA9. We would like to highlight the overall performance improvement of our FreRA compared to advanced SOTA methods on the 5 benchmarks in Table 1, as presented in the table below. \\n\\n| | FreRA (Ours) | best(T) | best(F) | InfoMin | InfoTS | AutoTCL | TS2Vec | TNC | TS-TCC | TF-C | SoftCLT |\\n|-----------------|:------------:|:-------:|:-------:|:-------:|:------:|:-------:|:------:|:-----:|:------:|:-----:|:-------:|\\n| Overall ACC (%) | **90.66** | 85.28 | 86.20 | 86.16 | 88.56 | 68.22 | 87.84 | 61.66 | 83.52 | 67.30 | 89.52 |\\n| Overall RANK | **1.00** | 6.60 | 5.20 | 4.20 | 4.00 | 8.60 | 5.60 | 9.60 | 7.80 | 9.80 | 3.00 |\\n\\n\\nAlthough the performance gain of FreRA over SoftCLT and InfoTS on the UEA and UCR archives is relatively modest, this is primarily attributed to the superior TS2Vec framework they both utilize. To illustrate, we present the baseline performances of TS2Vec and SimCLR in row 1 and row 4 of the table below. We notice that SimCLR is a weaker framework on the UEA and UCR archives. However, integrating FreRA to SimCLR results in significant improvements to the contrastive learning framework, which exceed the performance gains achieved by the augmentation in InfoTS and the soft InfoNCE loss in SoftCLT. This improvement eliminates the performance gap (0.018 for the UEA archive and 0.101 for the UCR archive) caused by the inferior SimCLR framework as compared to TS2Vec. 
\\n\\n| | CL framework | UEA Archive | UCR Archive | the factor driving the performance improvements |\\n|---|:------------------------------------------------:|----------------|:--------------:|-------------------------------------------------|\\n| 1 | original TS2Vec | 0.704 | 0.845 | |\\n| 2 | Information-Aware Augmentation + TS2Vec (InfoTS) | 0.714 (+0.010) | 0.849 (+0.004) | Information-Aware Augmentation |\\n| 3 | soft InfoNCE + TS2Vec (SoftCLT) | 0.751 (+0.047) | 0.850 (+0.005) | soft InfoNCE |\\n| 4 | original SimCLR | 0.686 | 0.744 | |\\n| 5 | FreRA + SimCLR | 0.754 (+0.068) | 0.850 (+0.106) | FreRA |\\n\\nMoreover, FreRA and the soft InfoNCE loss in SoftCLT represent independent improvements to the contrastive learning framework. Due to the **plug-and-play** design of FreRA, these enhancements can be seamlessly integrated. The results in rows 1-2 of the table below demonstrate that incorporating FreRA into SoftCLT further introduces additional gains to its performance.\\n\\n| | CL framework | UCIHAR | MS | WISDM | the factor driving the performance improvements |\\n|:-:|:-----------------------------------------------:|:--------------:|:--------------:|:--------------:|-------------------------------------------------|\\n|1| SoftCLT | 0.961 | 0.962 | 0.952 | |\\n|2| FreRA + SoftCLT | 0.969 (+0.008) | 0.974 (+0.012) | 0.956 (+0.004) | FreRA |\\n\\nNotably, in the evaluations of transfer learning and anomaly detection tasks, our FreRA achieves significant improvements. This is evidence that FreRA can capture the inherent semantics of the time series and generalize to unseen data distributions and different downstream tasks.\"}",
"{\"summary\": \"This paper proposes Frequency-Refined Augmentation (FreRA), an augmentation method for time series contrastive learning on classification tasks. FreRA automatically separates critical and unimportant frequency components, and accordingly proposes Identity Modification and Self-adaptive Modification for different components. It conducts experiments on two benchmark datasets including UCR and UEA archives, as well as 5 large-scale datasets on diverse applications. FreRA outperforms 10 leading baselines on time series classification, anomaly detection, and transfer learning tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Data augmentation is an important problem for time series and contrastive learning. This paper investigates this problem and proposes a method from the frequency perspective. The proposed method seems easy to follow and implement. The experiments in this paper are extensive, including many different datasets and tasks. The proposed method outperforms most of the baselines.\", \"weaknesses\": \"W1: Some important terms in this paper are not clearly defined, such as \\u2018semantic integrity\\u2019 and \\u2018critical and unimportant frequency components\\u2019. Why can we measure semantic integrity using mutual information? Is this consistent with humans\\u2019 understanding of semantics? How can we measure the importance of frequency components? What are these critical components critical for?\", \"w2\": \"The overall novelty of the proposed augmentation method is limited. Augmentation from the frequency components is not a new idea for time series. The main difference is a trainable vector s to control the augmentation of different components. It is unclear why s trained from Equation (7) can learn to select critical components automatically.\", \"w3\": \"It is unclear how FreRA achieves both semantic-preserving information and a considerable amount of variance. 
The authors need to clarify which designs correspond to these two sides respectively.\", \"w4\": \"The self-adaptive modification seems simple and tricky. It only uses the vector s and threshold to select and scale unimportant frequency components. The motivation for scaling these components is unclear.\", \"w5\": \"Compared with some SOTA baselines, such as SoftCLT and InfoTS, the advantage of FreRA is not clear, especially on UEA and UCR datasets. The ablation study is coarse, and some important variants are missing. For example, modifying all (or randomly selected) frequency components and modifying unimportant components randomly.\", \"questions\": \"Some other questions:\", \"q1\": \"As this paper focuses on time series classification, why is it also evaluated in anomaly detection?\", \"q2\": \"Figure 2 is hard to read. The different colors for vector blocks in FreRA are confusing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Kind reminder for discussion\", \"comment\": \"Dear Reviewer cXW7,\\n\\nWe would like to thank you again for providing valuable feedback and constructive suggestions. Please kindly let us know if our response has addressed your questions. We look forward to more discussions if you have further comments. \\n\\nBest,\\n\\nPaper 6561 Authors\"}",
"{\"title\": \"Responses to Reviewer cXW7 [2/3]\", \"comment\": \"> Q2. Although frequency methods can improve efficiency, it is unclear whether such methods mainly focus on the low-frequency part and ignore the high-frequency part which is more important for time series prediction.\\n\\nA2. The distribution of important frequency components **varies across datasets**, instead of having a fixed pattern, such as concentrating on a certain bandwidth. Our method FreRA is designed to **automatically identify the critical components** to preserve global semantic information. We verify this through the following two aspects.\\n\\n* First, we visualize the mutual information (MI) between the frequency components with the label in 3 datasets (Libras, ArticularyWordRecognition, and Epilepsy) by blue-grey bar plots in Figure 6 of our revised manuscript. As shown in the figure, the distribution of important frequency components (those with high MI values with labels) is **dataset-specific**. The important components are distributed in low frequencies, middle frequencies, and across multiple frequencies in these datasets, respectively. The **diversity of the distributions of important components across datasets** makes it unreasonable to directly apply current frequency-domain augmentation such as low- and high-pass filters. Therefore, an **adaptive** augmentation that can learn to identify the critical frequency information becomes practically useful. \\n\\n* Second, to further clarify the effectiveness of FreRA, we visualize the learned vector $\\\\mathbf{s}$ which determines the importance scores of all the frequency components, with the orange line plots in Figure 6. In the plots on all three datasets, despite diverse distributions, $\\\\mathbf{s}$ **consistently captures the inherent critical information** by learning to assign higher values to the most semantically relevant frequency components. \\n\\n> Q3. 
Do the authors consider the dependencies between channels, which is very significant for multivariate time series.\\n\\nA3. In existing methods, the cross-channel dependencies are usually captured by the encoder. It is in parallel with the augmentation strategy we seek to improve. During the data augmentation process, we conduct uniform transformations within all the channels, which has achieved **empirically competitive results on a wide range of multivariate time series datasets**, including UCIHAR, MS, WISDM, SHAR and the UEA archive. We fully agree that the cross-channel dependencies are important, although current automatic augmentations for multivariate time series have yet to incorporate this piece of information, including our FreRA. We look forward to exploring the incorporation of cross-channel dependency into augmentation strategy design in our future work.\"}"
]
} |
EAT5Jpa4ws | SHARE: Bridging Shape and Ray Estimation for Pose-Free Generalizable Gaussian Splatting | [
"Youngju Na",
"Taeyeon Kim",
"Jumin Lee",
"Kyu Beom Han",
"Woo Jae Kim",
"Sung-eui Yoon"
] | While generalizable 3D Gaussian Splatting enables efficient, high-quality rendering of unseen scenes, it heavily depends on precise camera poses for accurate geometry. In real-world scenarios, obtaining accurate poses is challenging, leading to noisy pose estimates and geometric misalignments. To address this, we introduce SHARE, a novel pose-free generalizable Gaussian Splatting framework that overcomes these ambiguities. Our ray-guided multi-view fusion network consolidates multi-view features into a unified pose-aware canonical volume, bridging 3D reconstruction and ray-based pose estimation. In addition, we propose an anchor-aligned Gaussian prediction strategy for fine-grained geometry estimation within a canonical view.
Extensive experiments on diverse real-world datasets show that SHARE achieves state-of-the-art performance in pose-free generalizable Gaussian splatting. | [
"3D Gaussian Splatting",
"Novel View Synthesis",
"Pose Estimation"
] | https://openreview.net/pdf?id=EAT5Jpa4ws | https://openreview.net/forum?id=EAT5Jpa4ws | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zDVagX82by",
"z21oDjLbWj",
"w3umDMcThw",
"vsyvEccwwh",
"qc6EbyWn9P",
"qCVSc7XOjp",
"nQMFMenBEc",
"lk9Liu5g7m",
"lTQYISJunY",
"hVySH2fcAO",
"fwCYagC82I",
"fL2N91WrBb",
"bcQtEJd3y3",
"ajZK9Aotq9",
"a4iJaT2NN8",
"ZTARK5QQhL",
"RfVUp5ojZq",
"Ko5DRJBJDP",
"KN2UH7Apwa",
"AKtj72P6b8",
"9JyfOMFhQZ",
"8ku23a9MrH",
"8FrXkYHaYj",
"6P8V8S6KXe",
"67ILD9z16m",
"4ZrVlgBriA",
"3qpRtH7luS",
"3QcZfQoBlB",
"2VmQwxlplI",
"209KEHQrwA",
"0KNvAivaV0"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1730474248262,
1732988657809,
1732264953029,
1733201765517,
1733201245190,
1730135340229,
1738577317124,
1732631028428,
1730466480643,
1732265181379,
1732264454671,
1732265584513,
1732778066197,
1732519743263,
1732632398958,
1732776367270,
1732264278510,
1732632482267,
1733032096870,
1732779014034,
1732264525487,
1732263610832,
1732633092600,
1732263638949,
1732265025925,
1732265432689,
1733127028836,
1732264107749,
1732553177426,
1733123130338,
1730300279574
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_rDog"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_f2vj"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_f2vj"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_QjrW"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_RF3w"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_QjrW"
],
[
"ICLR.cc/2025/Conference/Submission9315/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_f2vj"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_rDog"
],
[
"ICLR.cc/2025/Conference/Submission9315/Reviewer_RF3w"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces SHARE, a framework for pose-free generalizable 3D Gaussian Splatting that addresses the challenge of multi-view 3D reconstruction from unposed images. SHARE's key innovation is a ray-guided multi-view fusion network that consolidates multi-view features into a unified pose-aware canonical volume, bridging 3D reconstruction and ray-based pose estimation. It also proposes an anchor-aligned Gaussian prediction strategy for fine-grained geometry estimation within a canonical view. The paper reports that SHARE achieves state-of-the-art performance in pose-free generalizable Gaussian splatting through experiments on diverse real-world datasets, including DTU and RealEstate10K.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper presents a novel framework that addresses the challenge of pose-free generalizable 3D Gaussian Splatting, which is an under-explored field in 3D scene reconstruction and novel view synthesis.\", \"The approach of using a ray-guided multi-view fusion network to consolidate features into a canonical volume for Gaussian prediction is creative.\", \"The language is clear and technical terms are well-defined, making the paper accessible to readers familiar with the field.\"], \"weaknesses\": \"- Insufficient baselines and experiments.\\nI think the paper lacks the comparison with the state-of-the-art pose-free multi-view reconstruction framework, i.e., DUST3R [1] (or its subsequent work MAST3R [2]), in terms of pose estimation accuracy and reconstruction quality. Also, several recent works built upon DUSt3R also explored pose-free generalizable Gaussian Splatting, e.g., Splatt3R [3] and InstantSplat [4], I believe that including experimental results and discussions on these methods (at least Splatt3R since it is feed-forward) would make the paper's claim stronger.\\n\\n- Potentially unfair comparison with pixelSplat and MVSplat. 
\\nThe authors report the view synthesis results of pixelSplat and MVSplat using \\\"poses predicted by our method\\\" in Table 1 and Table 2. However, we are not clear about the quality of the pose prediction results of SHARE due to the lack of evaluations on pose estimation accuracy. What if we feed (potentially) more robust predicted poses to them, such as the outputs of MASt3R? \\nBesides, I notice that the results on DTU in Figure 5 are from 3 input views, while the original pixelSplat and MVSplat models were trained on paired images. How did the authors adapt them to 3 input views?\\n\\n- Small camera baselines and scalability. \\nThe proposed framework utilizes plane-sweep volumes and predicts all Gaussians from a canonical feature volume, raising concerns about its reconstruction capability on more challenging input views, such as large camera baselines and occlusions. The qualitative results shown in the paper demonstrate small camera movements compared to the input view; I hope the authors can include some discussion on the upper limit of the proposed method and its scalability to more diverse datasets. \\n\\n[1] Wang, Shuzhe, et al. \\\"Dust3r: Geometric 3d vision made easy.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. \\n[2] Leroy, Vincent, Yohann Cabon, and J\\u00e9r\\u00f4me Revaud. \\\"Grounding Image Matching in 3D with MASt3R.\\\" arXiv preprint arXiv:2406.09756 (2024). \\n[3] Smart, Brandon, et al. \\\"Splatt3r: Zero-shot gaussian splatting from uncalibrated image pairs.\\\" arXiv preprint arXiv:2408.13912 (2024). \\n[4] Fan, Zhiwen, et al. \\\"Instantsplat: Unbounded sparse-view pose-free gaussian splatting in 40 seconds.\\\" arXiv preprint arXiv:2403.20309 (2024).\", \"questions\": \"My main questions have been listed in the weakness part, and I will adjust my final rating according to the authors' response. 
I suggest the authors visualize all the input images in Figure 5 and Figure 2 of the supplementary, instead of labeling \\\"Input view (1/3)\\\" at the top. It is hard for readers to measure the view synthesis quality from only one input view.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to the authors\", \"comment\": \"Thank you for your efforts throughout the rebuttal. All my concerns have been resolved.\"}",
"{\"title\": \"Response to Reviewer RF3w (1/3)\", \"comment\": \"> As for the lack of discussions, the idea of introducing Plucker ray maps to represent camera poses has been introduced in CAT3D[1]. The authors should discuss about their differences, at least. Some related discussions and comparisons should be necessary to validate the effectiveness of this work.\\n> \\n\\nWe sincerely thank the reviewer for the insightful suggestion, which has significantly enhanced the clarity and depth of our work. In response,\\u00a0we have incorporated a discussion on CAT3D [1] and related methods that utilize Pl\\u00fccker rays as pose information in **Section 3** of our paper.\\n\\nTo clarify the distinction, our method fundamentally differs from these approaches by operating in a **pose-free setting**. While methods like CAT3D leverage ray-represented ground-truth poses as conditioning inputs or feature embeddings, our approach jointly learns to predict Pl\\u00fccker rays and 3D Gaussians in a feed-forward manner directly from input images. The predicted rays are integral to our multi-view fusion pipeline, as they provide geometric guidance to resolve ambiguities inherent in pose-free scenarios. This design enables our method to achieve robust performance without relying on explicit pose information, setting it apart from prior works.\\n\\n[1] Cat3d: Create anything in 3d with multi-view diffusion models\\n\\n---\\n\\n> As for the pose-free generalizable prediction of 3DGS primitives, Splat3R[2] also proposes another effective solution by estimating the camera poses through DUST3R[3].\\n> \\n\\nThank you for your valuable suggestions. We have compared our work with the concurrent work Splatt3R[1], which utilizes pre-trained MASt3R[2] weights for geometry estimation. We observed that Splatt3R faces a significant scale-ambiguity issue when applied to out-of-distribution data not seen during the training of MASt3R. 
The estimated scale of the reconstructed point clouds often misaligns with the scale of the camera poses for novel view rendering.\\n\\nTo address this, we attempted to fine-tune Splatt3R on the target datasets (RealEstate10K and DTU) using a photometric loss. However, this approach led to convergence issues, with the model producing blurry reconstructions. This behavior can be attributed to Splatt3R's reliance on geometry estimation from MASt3R, which requires ground-truth dense depths to mitigate the scale-ambiguity issue. Unfortunately, our target datasets present challenges in this regard: RealEstate10K lacks ground-truth depths, and DTU provides only sparse, masked depth maps, making it difficult to adapt Splatt3R directly without significant modifications.\\n\\nTo provide a fair baseline, we evaluated the pre-trained Splatt3R model (trained on ScanNet++) directly on our datasets under its original training conditions. We included both in-dataset (Tables A and B) and cross-dataset (Table C) generalization tests. 
We have included these results in the supplementary material, **Appendix A.4**, with a detailed discussion of the experimental settings, evaluation metrics, and qualitative visualizations.\\n\\n- Table A (comparison with Splatt3R on the DTU dataset)\\n| DTU | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---|---|---|---|\\n| Splatt3R | 11.78 | 0.28 | 0.57 |\\n| Ours | **17.50** | **0.34** | **0.48** |\\n- Table B (comparison with Splatt3R on the RealEstate10K dataset)\\n| RealEstate10K | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---|---|---|---|\\n| Splatt3R | 15.80 | 0.53 | 0.30 |\\n| Ours | **21.23** | **0.71** | **0.26** |\\n- Table C (comparison with Splatt3R on a cross-dataset generalization test with the ACID[3] dataset)\\n| **ACID** | **Training Data** | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---|---|---|---|---|\\n| Splatt3R | ScanNet++ | 17.49 | 0.63 | **0.26** |\\n| Ours | RealEstate10K | **23.47** | **0.69** | **0.26** |\\n\\nTables A and B show that our method outperforms Splatt3R on both datasets by a large margin. In addition, to assess the cross-dataset generalization quality, we tested our method (trained on RealEstate10K) and Splatt3R (trained on ScanNet++) on the ACID[3] dataset. Table C shows that our method achieves superior performance in all metrics, underscoring the robustness and generalizability of our approach. These results validate our method\\u2019s effectiveness in pose-free multi-view reconstruction, even in challenging scenarios without ground-truth depth supervision.\\n\\n[1] Smart, Brandon, et al. \\\"Splatt3r: Zero-shot gaussian splatting from uncalibrated image pairs.\\\" arXiv.\\u00a02024.\\n\\n[2] Leroy, Vincent, Yohann Cabon, and J\\u00e9r\\u00f4me Revaud. \\\"Grounding image matching in 3d with mast3r.\\\"\\u00a0ECCV, 2024.\\n\\n[3] Liu, Andrew, et al. \\\"Infinite nature: Perpetual view generation of natural scenes from a single image.\\\"\\u00a0ICCV. 2021.\"}",
"{\"title\": \"Official Comment for Reviewer QjrW\", \"comment\": \"**Response to Q1**\\n\\nThank you for your thoughtful comments. We appreciate the suggestion to introduce Scaffold-GS as a preliminary, and we agree that this will provide important context for our work. In the revised manuscript, **we will include a detailed discussion of Scaffold-GS** in **Section 4.3** of our paper, which will establish it as a foundational approach and help readers better understand how our method builds upon it. We will also **clearly highlight the two key differences** between our approach and Scaffold-GS:\\n\\n- Specifically, while Scaffold-GS uses voxelized centers derived from SfM reconstructions with dense views to define anchor points, our method takes a pose-free approach and predicts pixel-aligned anchor points in a canonical space in a data-driven manner.\\n- Furthermore, unlike Scaffold-GS, which relies on iterative optimization and ground-truth camera poses for scene-specific adjustments, our approach generalizes to unseen scenes without requiring such optimization, providing a significant distinction in flexibility and applicability.\\n\\nWe hope this addresses your concerns, and we are grateful for the constructive feedback.\\n\\n---\\n\\n**Response to Q2**\\n\\nWe apologize for any confusion regarding the cross-dataset experimental results. 
As you correctly pointed out, these results are mentioned in the **common comments section**, but we\\u2019ve moved the reply below for your convenience.\\n\\nWe conducted cross-dataset experiments, evaluating our model trained on the RealEstate10K dataset on the ACID dataset (following PixelSplat, MVSplat), and our model trained on DTU on the BlendedMVS dataset (following SparseNeuS[1], UFORecon[2]).\\n\\n| | | **RealEstate10K \\u2192 ACID** | | | **DTU \\u2192 BlendedMVS** | | |\\n|-|-|-|-|-|-|-|-|\\n|Method|Pose|**PSNR\\u2191**|**SSIM\\u2191**|**LPIPS\\u2193**|**PSNR\\u2191**|**SSIM\\u2191**|**LPIPS\\u2193**|\\n|**PixelSplat** | GT | 26.84 | 0.81 | 0.18 | 11.64 | 0.20 | 0.67 |\\n| |\\u03c3 = 0.01 | 21.73 | 0.57 | 0.28 | 11.65 | 0.20 | 0.68 |\\n|**MVSplat**|GT|28.18|0.84|0.15|12.04 | 0.19 | 0.56 |\\n| | \\u03c3 = 0.01|21.65|0.57|0.27|11.92|0.20|0.59 |\\n|**Ours**|-|23.47|0.69|0.26|12.19|0.26|0.61|\\n\\nAs shown in the table, our method exhibits strong generalizability, performing comparably to or even surpassing the baselines that utilize GT poses. We also compared against the baseline methods under a minimal Gaussian noise level (\\u03c3 = 0.01), whose rotation and translation angular errors are far lower than those of state-of-the-art pose estimators. We included the comprehensive quantitative (**Table 6**) and qualitative results (**Figure 11**) in **Appendix A.4**.\\n\\nIn choosing the evaluation datasets, we considered two factors: 1) to ensure a fair comparison, we followed the baseline methods and built upon the established conventions, and 2) the intended use of the datasets, which differ in terms of scene types (e.g., RealEstate10K \\u2192 ACID for indoor/outdoor scenes) and focus (e.g., DTU \\u2192 BlendedMVS for object-centered evaluation).\\n\\n[1] Long et al. Sparseneus: Fast generalizable neural surface reconstruction from sparse views, ECCV 2022\\n\\n[2] Na et al. 
UFORecon: Generalizable Sparse-View Surface Reconstruction from Arbitrary and UnFavOrable Data Sets, CVPR 2024\\n\\n---\\n\\n**Response to Q3**\\n\\nThank you for your suggestion. We agree that per-scene optimization methods, such as SPARF, effectively alleviate pose noise by jointly optimizing the noisy poses. We'll include SPARF in the Appendix, as you suggested.\\n\\nHowever, **the problem setting of SPARF involves per-scene optimization, which is beyond the scope of our approach**. Our method, along with the baseline approaches we compare, assumes a feed-forward solution, making it difficult to directly compare with per-scene optimization approaches. In addition, SPARF considers **noisy** poses as input, which requires a reasonable starting point, while ours assumes **pose-free** scenarios. \\n\\n---\\n\\n**Response to Q4**\\n\\nApologies for the confusion. To clarify, the **experimental setting for this study is based on the DTU dataset with large camera baselines.** We didn\\u2019t include this in the ablation study, as these are already established techniques commonly used in building cost volumes. However, we plan to add them to the supplementary materials for the final revision, as they provide meaningful improvements.\\n\\nAlso, regarding the values you mentioned, we normalized the camera baseline following the baseline methods (PixelSplat, MVSplat) during the discussion period for the cross-dataset generalizability test, which led to slight value changes across all datasets. Specifically, the values you mentioned have changed from 19.09dB \\u2192 18.78dB (refer to the larger-baseline results in Table 1). Accordingly, we have to re-run the mean-variance experiments. However, as these results will require 2-3 more days of work, we won\\u2019t be able to include them within the remaining discussion period. Nonetheless, we expect similar trends in the updated experiments, which will be included in the revised manuscript.\"}",
"{\"title\": \"Official Comment for Reviewer rDog\", \"comment\": \"Thank you for your thoughtful and detailed comments. We appreciate your valuable suggestion for improving the fairness of the evaluation.\\n\\nFirst, we would like to clarify the issue of scale ambiguity in Splatt3R. It relies on point clouds predicted by the pre-trained MASt3R, which is designed to generate point clouds at metric scale. However, due to inherent inaccuracies in the prediction process, there is a discrepancy between the estimated scale of the point clouds and the ground-truth scale. This misalignment leads to poor rendering, particularly in the form of distorted or inconsistent results.\\n\\nIn fact, to address this, we included a pose-rescaling step, as we found that directly using the ground-truth pose scale led to rendering black images. Therefore, during the rendering process, we manually rescale the ground-truth poses by normalization based on the scale of the predicted point clouds.\\n\\nWe acknowledge that this scale ambiguity is an inherent limitation of Splatt3R. However, as you pointed out, rescaling the poses based on the predicted scale could offer a more consistent and fairer evaluation. In response, we have conducted additional experiments using rescaled target poses derived from the predicted camera poses. 
We denote by **Splatt3R*** in the tables below the variant evaluated with target poses rescaled according to the predicted poses.\\n\\n- Table A\\n| DTU | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---|---|-----|----|\\n| Splatt3R | 11.78 | 0.28 | 0.57 |\\n| Splatt3R* | 12.53 | **0.38** | 0.49 |\\n| Ours | **17.50** | 0.34 | **0.48** |\\n- Table B\\n| RealEstate10K | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|----|----|----|----|\\n| Splatt3R | 15.80 | 0.53 | 0.30 |\\n| Splatt3R* | 15.14 | 0.48 | 0.39 |\\n| Ours | **21.23** | **0.71** | **0.26** |\\n- Table C\\n| **ACID** | **Training Data** | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---|---|---|---|---|\\n| Splatt3R | ScanNet++ | 17.49 | 0.63 | **0.26** |\\n| Splatt3R* | ScanNet++ | 19.81 | **0.71** | **0.26** |\\n| Ours | RealEstate10K | **23.47** | 0.69 | **0.26** |\\n\\nDespite these efforts, we found that introducing this modification during evaluation does not fully resolve the scale ambiguity. A key challenge is that predicted poses are not entirely accurate and contain inherent errors. Consequently, an incorrect scaling factor may be derived, leading to errors in the rescaled target pose. These inaccuracies can further introduce rendering issues, as they may distort the relative pose of the target camera, which impacts the final result.\\n\\nIn summary, while we agree that the rescaling approach you suggested provides a more consistent evaluation metric, it does not fully eliminate the scale ambiguity inherent in Splatt3R.\\n\\nWe hope that our additional experiments and clarifications provide a better justification of the evaluation procedure and address your concerns. Thank you again for your constructive feedback.\"}",
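The pose-rescaling procedure described in this response (normalizing ground-truth target poses to the scale of the predicted point clouds) can be sketched roughly as follows. This is a minimal illustration under our own assumptions: the scale statistic (median distance to the centroid) and all function names are hypothetical, not the exact procedure behind Splatt3R*.

```python
import numpy as np

def scene_scale(points: np.ndarray) -> float:
    """Assumed scale statistic: median distance of points to their centroid."""
    centroid = points.mean(axis=0)
    return float(np.median(np.linalg.norm(points - centroid, axis=1)))

def rescale_pose(T_w2c: np.ndarray, scale: float) -> np.ndarray:
    """Scale the translation of a 4x4 pose; rotation is scale-invariant."""
    T = T_w2c.copy()
    T[:3, 3] *= scale
    return T

# Hypothetical usage: bring a ground-truth target pose to the predicted scale.
rng = np.random.default_rng(0)
pred_points = rng.random((1000, 3))   # stand-in for a predicted point cloud
gt_points = pred_points * 2.0         # same geometry at the ground-truth scale
s = scene_scale(pred_points) / scene_scale(gt_points)
gt_pose = np.eye(4)
gt_pose[:3, 3] = [0.0, 0.0, 4.0]
rescaled = rescale_pose(gt_pose, s)   # translation now matches the predicted scale
```

As noted above, any error in the predicted point cloud propagates into the scale factor `s`, which is why this rescaling cannot fully remove the ambiguity.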
"{\"summary\": \"This paper introduces a pose-free generalizable Gaussian Splatting framework that leverages a feed-forward network to directly regress camera poses and Gaussians from unposed sparse RGB images. The authors propose two key modules to enhance the performance: Ray-Guided Multi-View Fusion, which consolidates multi-view features into a canonical volume using Pl\\u00fccker rays for pose estimation and scene geometry estimation, and Anchor-Aligned Gaussian Prediction, which predicts anchor points and offsets to generate refined Gaussian Splatting for detailed reconstruction. These modules enable the proposed framework to outperform previous methods on benchmarks like DTU and RealEstate10K.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is clearly written with informative illustrations. The proposed framework is intuitive and the motivation behind each module is well-explained. The evaluation and ablation results validate the impact of the proposed components.\", \"The idea of coarse-to-fine Gaussian splatting generation using anchor-aligned Gaussian prediction is innovative and effective.\"], \"weaknesses\": \"- The main concern is the lack of comparison with state-of-the-art (SOTA) pose estimation methods like COLMAP, DUSt3R[1], and MASt3R[2]. The proposed method should be compared against baselines such as camera poses from COLMAP or DUSt3R combined with MVSplat/PixelSplat. While I expect COLMAP may not perform very well given the sparse image input, DUSt3R/MASt3R is likely to give relatively accurate pose estimation, as the InstantSplat[3] paper shows.\\n- The paper lacks evaluation in cross-dataset or in-the-wild settings, which raises concerns about the generalizability of the proposed methods, particularly in terms of pose estimation.\\n [1]: Wang S, Leroy V, Cabon Y, et al. Dust3r: Geometric 3d vision made easy[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 
2024: 20697-20709.\\n [2]: Leroy V, Cabon Y, Revaud J. Grounding Image Matching in 3D with MASt3R[J]. arXiv preprint arXiv:2406.09756, 2024.\\n [3]: Fan Z, Cong W, Wen K, et al. Instantsplat: Unbounded sparse-view pose-free gaussian splatting in 40 seconds[J]. arXiv preprint arXiv:2403.20309, 2024.\", \"questions\": [\"It's acceptable that the method is unable to outperform pixelsplat/MVSplat under the GT pose assumption, since it's infeasible to obtain such accurate poses using sparse image input. But as mentioned in the weakness, we need to see whether it can outperform other methods using SOTA camera pose estimation.\", \"Are the baselines shown in the table trained with GT camera poses or noisy camera poses?\", \"The implementation details of other baselines seem to be missing in the Appendix/Supplementary, which is claimed in line 405. The detailed implementation of all the network structure is also missing, as claimed in line 419.\", \"The equation related to \\\\delta p is missing in equation 5. Based on Figure 4, it appears to be derived using network f_p. However, f_p in Equation 5 is used to generate Gaussian attributes, which creates some misalignment between the figure and the equation.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to withdraw our submission from ICLR OpenReview. Thank you for your consideration.\"}",
"{\"title\": \"Response to Reviewer f2vj\", \"comment\": \"Dear reviewer f2vj,\\n\\nThank you for your response and for the detailed discussion.\\n\\n> Could you clarify why you mentioned that Mast3R and Dust3R rely on ground-truth dense depth maps? \\n\\nAs you mentioned, Dust3R and MASt3R only require RGB inputs during inference. What we wanted to clarify was that these methods use ground-truth depth maps during training, whereas our approach does not require ground-truth depth maps at any stage of training.\\n\\n---\\n\\n> However, is it possible to train them using the estimated poses from MASt3R or DUSt3R? \\n\\nWe sincerely appreciate your detailed discussion and agree that this is an interesting experiment. To verify this, we trained MVSplat using poses predicted by DUSt3R[1]. However, even with these predicted poses, we found that the training of such baselines remains unstable and leads to degenerate performance. As we have claimed, this instability arises from the high sensitivity of these baselines to pose errors. To further clarify this discussion, we have updated the visualization in **Figure 10** in **Appendix A.2**.\\n\\n[1] Wang S, Leroy V, Cabon Y, et al. \\u201cDust3r: Geometric 3d vision made easy\\u201d CVPR, 2024\"}",
"{\"summary\": \"This paper focuses on generalizable Gaussian splatting from sparse unposed images. To this end, it employs the Pl$\\\\ddot{u}$cker ray representation for relative pose estimation. Based on the ray representation, it builds cost volumes from extracted image features. Moreover, it embeds the ray representation into the cost volumes using patch-wise cross attention. After aggregating these cost volumes, a geometry volume and a feature volume are obtained to construct Gaussians. This work employs anchor points to distribute local Gaussians. By optimizing both the Gaussians and the ray representation, it can recover the pose and the 3D scene at the same time. Experiments on DTU and RealEstate10K verify the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces the Pl$\\\\ddot{u}$cker ray representation for relative pose estimation instead of directly predicting camera rotation and translation.\\n2. Based on the pose representation, the proposed method embeds learned pose information into the cost volume to improve the Gaussian learning. \\n3. For Gaussian learning, this work leverages anchor points to distribute local Gaussians, which can hierarchically learn intricate textures or complex geometries.\", \"weaknesses\": \"1. The proposed method relies on cost volume construction, which requires depth range priors. Moreover, can you discuss the limitation of the proposed method in tackling unbounded $360^{\\\\circ}$ scenes?\\n2. In fact, the anchor point idea used in this work is proposed by Scaffold-GS [1]. Can you clarify the difference between your use and Scaffold-GS? Maybe it is better to present a preliminary to introduce it as the scene representation. \\n[1] Lu, Tao, et al. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n3. 
The proposed method is trained on DTU and RealEstate10K separately, and each trained model is then tested on its corresponding dataset. This cannot verify the generalization ability of the proposed method. Can you test the proposed method on RealEstate10K with the model trained on DTU, and test the proposed method on DTU with the model trained on RealEstate10K?\", \"questions\": \"1. Regarding generalization ability, can the model trained on one dataset generalize to different datasets? For example, can the model trained on RealEstate10K be used to test the Tanks and Temples datasets [2]?\\n[2] Knapitsch, Arno, et al. \\\"Tanks and temples: Benchmarking large-scale scene reconstruction.\\\" ACM Transactions on Graphics (ToG) 36.4 (2017): 1-13. \\n2. This work uses the ray representation from RayDiffusion. Can you compare the pose estimation performance with RayDiffusion?\\n3. For the pose-required methods, such as MVSplat, I am wondering if their rendering performance will improve if they are trained with the pose information estimated by the proposed method or RayDiffusion.\\n4. In fact, the weighted cost volume in Eq. (3) can reflect the complex visibility information better. Why is the mean- and variance-based volume added? Can you have an experiment on this?\\n5. Can you show the efficiency of the proposed method in terms of inference time and GPU memory usage? Can the proposed method tackle higher-resolution input images, such as the original-resolution images in DTU and Tanks and Temples?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer RF3w (3/3)\", \"comment\": \"> **How are the ground truth Plucker rays acquired? Are they acquired with the original camera poses?**\\n> \\n\\nThe ground truth Pl\\u00fccker rays are derived from the conventional 4 $\\\\times$ 4 extrinsic matrices available during training. These matrices are directly transformable to Pl\\u00fccker ray representations and vice versa, as described in Section 4.1 of our paper. For this transformation, we adhere to the formulation detailed in Cameras as Rays [1].\\n\\nIt is important to emphasize that this reliance on camera poses is strictly limited to the training phase. During inference, our method completely eliminates the need for any explicit pose assumptions, ensuring a fully pose-independent inference pipeline.\\n\\n[1] Zhang, Jason Y., et al. \\\"Cameras as Rays: Pose Estimation via Ray Diffusion.\\\", NeurIPS, 2024.\\n\\n---\\n\\n> **How is the performance on more challenging datasets, e.g., ScanNet?**\\n> \\n\\nTo evaluate our method across datasets of varying scales, we conducted experiments on the DTU dataset, which contains smaller scenes, and the RealEstate10K dataset, which includes larger indoor and outdoor scenes. Additionally, we rigorously assessed the cross-dataset generalization capabilities of our approach on the ACID [1] and BlendedMVS [2] datasets, both of which feature diverse indoor and outdoor scenarios, to further demonstrate the robustness of our method. We kindly direct the reviewer to the common comments in the top-level response. \\n\\nWe appreciate the reviewer\\u2019s suggestion to test on additional datasets, such as ScanNet[3] or ScanNet++[4], to further validate our approach. While obtaining access to large-scale datasets requires additional time due to permission and resource constraints, we plan to include results on such datasets and ensure they are incorporated into the final revision. This extension will provide a more comprehensive evaluation of our method. 
\\n\\n[1] Liu, Andrew, et al. \\\"Infinite nature: Perpetual view generation of natural scenes from a single image.\\\"\\u00a0ICCV. 2021.\\n\\n[2] Yao, Yao, et al. \\\"Blendedmvs: A large-scale dataset for generalized multi-view stereo networks.\\\"\\u00a0CVPR. 2020.\\n\\n[3] Dai et al. \\\"ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes\\\" CVPR. 2017.\\n\\n[4] Yeshwanth et al. \\\"ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes\\\" ICCV. 2023\\n\\n---\\n\\n> **Besides, I am also curious about the number of Gaussian primitives acquired by this framework. As only one canonical view is used for prediction, would the number of predicted primitives be less than in other methods?**\\n> \\n\\nOur approach is designed to maintain a fixed number of Gaussian primitives, regardless of the number of input views. This is achieved by predicting 3 offsets per Gaussian, as detailed in the main paper. This strategy ensures an efficient and consistent representation while effectively capturing the scene geometry.\\n\\nIn contrast, baseline methods typically experience a linear increase in the number of Gaussians as the number of input views grows, resulting in higher computational costs. Furthermore, our design is inherently robust to geometry misalignments caused by pose errors, as it avoids introducing any misalignment in 3D space, ensuring stable representations.\"}",
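The extrinsics-to-ray conversion discussed in this response (4×4 extrinsic matrices mapped to Plücker rays) can be sketched with the standard parametrization, where a ray is the pair (d, m) with moment m = c × d and c the camera center. This is our own minimal illustration under assumed names and shapes, not the paper's implementation.

```python
import numpy as np

def pluecker_from_extrinsics(R: np.ndarray, t: np.ndarray, dirs_cam: np.ndarray) -> np.ndarray:
    """Map camera-frame ray directions to world-frame Pluecker rays (d, m).

    R, t: world-to-camera rotation (3x3) and translation (3,).
    dirs_cam: (N, 3) ray directions in the camera frame.
    Returns (N, 6): world direction d and moment m = c x d, where
    c = -R^T t is the camera center in world coordinates.
    """
    c = -R.T @ t
    d = dirs_cam @ R  # row-wise application of R^T to each direction
    m = np.cross(np.broadcast_to(c, d.shape), d)
    return np.concatenate([d, m], axis=1)
```

The moment m is the same for any point chosen on the ray, which is what makes this 6-vector a well-defined regression target, and the map is invertible back to an extrinsic matrix given a bundle of rays.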
"{\"title\": \"Response to Reviewer QjrW (2/3)\", \"comment\": \"> **This work uses the ray representation from RayDiffusion. Can you compare the pose estimation performance with RayDiffusion?**\\n\\n| Method | Rot. \\u2193 | Trans. \\u2193 |\\n| --- | --- | --- |\\n| COLMAP | 7.10 | 31.62 |\\n| Relpose++ | 19.56 | 44.18 |\\n| RayRegression | 3.10 | 6.57 |\\n| DUSt3R | 1.77 | 13.66 |\\n| MASt3R | 2.40 | 3.52 |\\n| **Ours** | 2.74 | 6.28 |\\n\\nWe compared the performance of our pose estimation method against state-of-the-art approaches, including **RayRegression** proposed in the RayDiffusion paper [1]. We would like to respectfully clarify that we chose to compare with RayRegression rather than RayDiffusion because, as noted in **Table 6** of the RayDiffusion [1] appendix, regression-based methods require only 0.1 seconds for inference, whereas diffusion-based methods (RayDiffusion) take 11.1 seconds. While diffusion-based approaches may offer slightly improved performance, their slower inference times make them less suitable for our pipeline, which prioritizes efficiency.\\n\\nOur method demonstrates improved performance compared to RayRegression, highlighting the synergistic benefits of jointly optimizing shape and camera rays. Meanwhile, it is important to emphasize that our method is primarily designed for pose-free novel view rendering, with pose estimation being an auxiliary outcome of the process.\\n\\n[1] Zhang, Jason Y., et al. \\\"Cameras as Rays: Pose Estimation via Ray Diffusion.\\\", NeurIPS, 2024.\\n\\n---\\n\\n> **For the pose-required methods, such as MVSplat, I am wondering if their rendering performance will improve if they are trained with the pose information estimated by the proposed method or RayDiffusion.**\\n> \\n\\nIn the early stages of our research, we explored the idea of training a generalizable 3DGS model with predicted poses or in an end-to-end manner alongside a pose prediction model. 
However, we found that using noisy or predicted poses caused instability during training and a tendency to converge to local optima.\\nThis instability arises from the sensitivity of pixel-aligned methods (e.g., MVSplat) to pose estimation accuracy, where even small inaccuracies can lead to geometric misalignments (as conceptualized in **Figure 2**). Consequently, these methods struggle to train effectively without highly accurate poses. We have expanded on the discussion in **Appendix A.2** and provided a visualization of training failure in **Figure 10** for further clarity.\\n\\n---\\n\\n> **In fact, the weighted cost volume in Eq. (3) can reflect the complex visibility information better. Why is the mean- and variance-based volume added? Can you have an experiment on this?**\\n> \\n\\nWe appreciate the reviewer\\u2019s suggestion for this analysis. As explained in **Section 4.2** on canonical volume construction, the mean-variance volume is designed to mitigate the risk of a trivial solution where a single view disproportionately dominates the fusion process, drawing inspiration from MVSNet [1]. This design enhances training stability by ensuring balanced contributions from all views, avoiding over-reliance on any single view. Our experimental results demonstrate improved performance with the inclusion of the mean-variance volume.\\n\\n| Method | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n| --- | --- | --- | --- |\\n| w/o mean-var volume | 18.10 | 0.55 | 0.33 |\\n| Ours | **19.09** | **0.64** | **0.29** |\\n\\n[1] Yao, et al. \\\"Mvsnet: Depth inference for unstructured multi-view stereo.\\\"\\u00a0*ECCV,* 2018.\\n\\n---\\n\\n> **Can you show the efficiency of the proposed method in terms of inference time and GPU memory usage?**\\n> \\n\\nWe thank the reviewer for their valuable suggestion. 
Following the recommendation, we measured the inference time efficiency and GPU memory consumption of our method, alongside CoPoNeRF[1], MVSplat combined with a pose estimator (MASt3R[2]), and Splatt3R[3] on the RealEstate10K dataset. We ran all experiments on an RTX 3080. The results are presented in the table below.\\n\\n| Method | Inference Time (s) | GPU Memory (MB) |\\n| --- | --- | --- |\\n| CoPoNeRF | 3.37 | 9587.22 |\\n| MVSplat + MASt3R | 0.22 | **4376.94** |\\n| Splatt3R | 0.26 | 6198.00 |\\n| **Ours** | **0.17** | 5887.18 |\\n\\nWe would also like to highlight that our approach achieves superior rendering quality compared to all baselines. Additional experimental details and the corresponding table have been included in **Appendix A.4** for further reference.\\n\\n[1] Hong, Sunghwan, et al. \\\"Unifying Correspondence, Pose and NeRF for Generalized Pose-Free Novel View Synthesis.\\\"\\u00a0CVPR, 2024.\\n\\n[2] Leroy, Vincent, Yohann Cabon, and J\\u00e9r\\u00f4me Revaud. \\\"Grounding image matching in 3d with mast3r.\\\"\\u00a0*ECCV*, 2024.\\n\\n[3] Smart, Brandon, et al. \\\"Splatt3r: Zero-shot gaussian splatting from uncalibrated image pairs.\\\" arXiv.\\u00a02024.\"}",
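The mean-variance construction referenced in the ablation above follows the MVSNet-style aggregation: per-view feature volumes, warped onto shared depth hypotheses, are reduced by their mean and variance so that every view contributes symmetrically. A minimal sketch under assumed tensor shapes, not the exact network code:

```python
import numpy as np

def mean_variance_volume(feat_volumes: np.ndarray):
    """MVSNet-style aggregation over V per-view feature volumes.

    feat_volumes: (V, C, D, H, W) features warped onto shared depth hypotheses.
    Returns (mean, variance), each (C, D, H, W). The variance treats all views
    symmetrically, so no single view can dominate the fused volume.
    """
    mean = feat_volumes.mean(axis=0)
    var = ((feat_volumes - mean) ** 2).mean(axis=0)
    return mean, var
```

Because the variance is permutation-invariant over views, it avoids the trivial solution in which one view dominates the fusion, which is the motivation given in the ablation response.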
"{\"title\": \"Response to Reviewer f2vj (2/2)\", \"comment\": \"> **The paper lacks evaluation in cross-dataset or in-the-wild settings, which raises concerns about the generalizability of the proposed methods, particularly in terms of pose estimation.**\\n> \\n\\n**We kindly direct the reviewer to the first response of the shared official comments,** which provide comprehensive cross-dataset evaluations. We hope these comments cover the points raised and align with the context of this question.\\n\\n---\\n\\n> **Are the baselines shown in the table trained with GT camera poses or noisy camera poses?**\\n> \\n\\nThe baselines are trained with GT camera poses, as clarified in the updated **Appendix A.2**, where we provide detailed explanations of the baseline design. In the early stages of our research, we explored the idea of training a generalizable 3DGS model with predicted poses or in an end-to-end manner alongside a pose prediction model. \\n\\nHowever, we found that using noisy or predicted poses caused instability during training and a tendency to converge to local optima. This instability arises from the sensitivity of pixel-aligned methods (e.g., MVSplat) to pose estimation accuracy, where even small inaccuracies can lead to geometric misalignments (as conceptualized in **Figure 2**). Consequently, these methods struggle to train effectively without highly accurate poses. To address this, we have expanded on the discussion in **Appendix A.2** and provided a visualization of training failure in **Figure 10** for further clarity.\\n\\n---\\n\\n> **The implementation details of other baselines seem to be missing in the Appendix/Supplementary, which is claimed in line 405.**\\n> \\n\\n> **The detailed implementation of all the network structure is also missing, as claimed in line 419.**\\n> \\n\\n> **The equation related to \\\\delta p is missing in equation 5. Based on Figure 4, it appears to be derived using network f_p. 
However, f_p in Equation 5 is used to generate Gaussian attributes, which creates some misalignment between the figure and the equation.**\\n> \\n\\nWe sincerely thank the reviewer for pointing out these important details. We apologize for the oversight in the earlier version and have now included detailed implementation descriptions in **Appendix A.2. and A.3.**. Additionally, the inconsistencies between **Equation 5** and **Figure 4** have been addressed and revised in both the text and the figure for improved clarity.\"}",
"{\"title\": \"Official Comment for Reviewer QjrW\", \"comment\": \"As the PDF revision period is drawing to a close and the discussion deadline is approaching, we would like to kindly remind you of our improvements. During the review period, we were able to improve our paper thanks to your valuable feedback. Based on your suggestions, we made the following enhancements:\\n\\n- We clarified the differentiation of our work from additional related works, such as **Scaffold-GS[1]**.\\n- We validated our method's **cross-dataset generalizability**, showing robustness on different datasets.\\n- By **comparing pose estimation performance with RayRegression[2]**, we illustrated that jointly learning both shape and pose leads to a synergistic improvement in performance.\\n- We included results that show how **training pose-required methods with estimated poses leads to instability**, which further emphasizes the effect of jointly learning the shape and pose.\\n- Through additional **ablation studies on the mean-variance volume**, we were able to assess the effectiveness of our pipeline design further.\\n- We presented an **efficiency test** that shows that our method outperforms our concurrent work, Splatt3R[3], in terms of reconstruction as well as inference time and GPU memory usage. \\n\\nOnce again, we sincerely appreciate your valuable feedback. We would greatly appreciate it if you could take these changes and the resulting experimental improvements into consideration when finalizing your review. If you have any further questions or concerns, we are happy to address them.\\n\\n---\\n\\n[1] Lu, Tao, et al. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. CVPR, 2024.\\n\\n[2] Zhang, Jason Y., et al. \\\"Cameras as Rays: Pose Estimation via Ray Diffusion.\\\", NeurIPS, 2024.\\n\\n[3] Smart, Brandon, et al. \\\"Splatt3r: Zero-shot gaussian splatting from uncalibrated image pairs.\\\" arXiv. 2024.\"}",
"{\"title\": \"Response to the authors\", \"comment\": \"Thanks for the careful response of the authors. Most of my concerns are resolved. I will keep my rating.\"}",
"{\"title\": \"Kind reminder for reviewer-author discussion\", \"comment\": \"Dear Reviewer rDog,\\n\\nAs the discussion period is ending soon, we wanted to kindly remind you of our responses to your comments. We truly value your feedback and are happy to answer any remaining questions or concerns you might have.\\n\\nPlease feel free to let us know if there is any more information we can provide to help with the discussion.\\n\\nThank you again for your time and thoughtful review.\\n\\nBest regards,\\nAuthors\"}",
"{\"title\": \"Official Comment for Reviewer rDog\", \"comment\": \"As the PDF revision period is drawing to a close and the discussion deadline is approaching, we would like to kindly remind you of our improvements. We first want to thank you for your insightful feedback, which has greatly contributed to improving our paper. Based on your suggestions, we have made the following enhancements:\\n\\n- We have incorporated a recent concurrent work, **Splatt3R**[1], into our baseline comparison. This allows us to demonstrate that our approach outperforms the existing pose-free novel view synthesis method.\\n\\n- We combined pose-dependent **baselines (pixelSplat[2], MVSplat[3]) with various state-of-the-art pose estimators**, enhancing the thoroughness of the evaluation and baselines. This experiment further supports our claim that simply combining pose estimators with pose-dependent methods leads to geometry misalignment.\\n\\n- To strengthen the presentation of our results, we included **qualitative results for large baselines**, improving the overall clarity of our paper.\\n\\n- We conducted additional experiments on datasets such as ACID[4] and BlendedMVS[5], further demonstrating the **cross-dataset generalizability** of our approach.\\n\\nWe are pleased with the improvements we have made to the paper as a result of your valuable feedback. Once again, thank you for your thoughtful comments, and we kindly ask that you consider reflecting these changes and the resulting experimental improvements in your review score.\\n\\n---\\n\\n[1] Smart, Brandon, et al. \\\"Splatt3r: Zero-shot gaussian splatting from uncalibrated image pairs.\\\" arXiv. 2024.\\n\\n[2] Charatan, David, et al. \\\"pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction.\\\" CVPR, 2024.\\n\\n[3] Chen, Yuedong, et al. \\\"Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images.\\\" ECCV, 2024.\\n\\n[4] Liu, Andrew, et al. 
\\\"Infinite nature: Perpetual view generation of natural scenes from a single image.\\\" ICCV. 2021.\\n\\n[5] Yao, Yao, et al. \\\"Blendedmvs: A large-scale dataset for generalized multi-view stereo networks.\\\" CVPR. 2020.\"}",
"{\"title\": \"Response to Reviewer rDog (2/2)\", \"comment\": \"> **I notice that the results on DTU in Figure 5 are from 3 input views, while the original pixelSplat and MVSplat models were trained on paired images. How did the authors adapt them to 3 input views?**\\n> \\n\\nSince PixelSplat and MVSplat predict depths from each viewpoint and transform and fuse them using GT poses, they are naturally adaptable to different numbers of viewpoints. This is also included in their official GitHub repository. For our baseline, we trained PixelSplat and MVSplat with 3 views for the DTU dataset.\\n\\n---\\n\\n> **The proposed framework utilizes plane-sweep volumes and predicts all Gaussians from a canonical feature volume, raising concerns about its reconstruction capability on more challenging input views, such as large camera baselines and occlusions. The qualitative results shown in the paper demonstrate small camera movements compared to the input view.**\\n> \\n\\nWhile it is generally acknowledged that plane-sweep volume methods can be sensitive to large camera baselines, we would like to highlight that our backbone leverages matching features [1] and a correlation-based cost volume, effectively addressing these challenges, as also shown in prior works (UFORecon [2], CoPoNeRF [3]). To validate our method\\u2019s adaptability to large camera baselines, we have added the qualitative results on large-baseline inputs in **Figure 14** of **Appendix A.4**.\\n\\n[1] Xu, Haofei, et al. \\\"Unifying flow, stereo and depth estimation.\\\"\\u00a0TPAMI,\\u00a02023.\\n\\n[2] Na, Youngju, et al. \\\"UFORecon: Generalizable Sparse-View Surface Reconstruction from Arbitrary and UnFavOrable Data Sets.\\\"\\u00a0CVPR, 2024.\\n\\n[3] Hong, Sunghwan, et al. 
\\\"Unifying Correspondence, Pose and NeRF for Pose-Free Novel View Synthesis from Stereo Pairs.\\\"\\u00a0CVPR, 2024.\\n\\n---\\n\\n> **I hope the authors can include some discussions on the upper limit and scalability to more diverse datasets of the proposed method.**\\n> \\n\\nAs noted in the shared official response, we acknowledge the importance of extending the proposed method to more diverse datasets and scenarios.\\n\\nRegarding upper limitation, we recognize the challenges in applying our method directly to complex scenarios such as sparse 360-degree input images, as these involve significantly different geometric and visual conditions (e.g., occlusion, extremely low overlap). Recent advances, such as MVSplat360 [1], have shown promise in addressing these scenarios by integrating generative priors for improved 360-degree synthesis. We believe that combining our approach with such methods could offer enhanced performance, particularly for challenging cases involving sparse and wide-baseline inputs.\\n\\nWe appreciate your suggestion and we've included more detailed discussions on these topics in the discussion section of the revised manuscript.\\n\\n[1] Chen, Yuedong, et al. \\\"MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views.\\\"\\u00a0NeurIPS, 2024\\n\\n---\\n\\n> **I suggest the authors to visualize all the input images in Figure 5 and Figure 2 of the supplementary, intead of labeling \\\"Input view (1/3)\\\" on the top. It is hard for readers to measure the view synthesis quality from only one input view.**\\n> \\n\\nWe thank the reviewer for the thoughtful suggestion. In the **Figure 5** of the main paper, we included up to two input views to prevent the image size from becoming too small to read clearly. Meanwhile, the figures in the **Appendix A.4** include all input views. We hope this revision has improved the overall presentation.\"}",
"{\"title\": \"Kind reminder for reviewer-author discussion\", \"comment\": \"Dear Reviewer QjrW,\\n\\nAs the discussion period is ending soon, we wanted to kindly remind you of our responses to your comments. \\nWe truly value your feedback and are happy to answer any remaining questions or concerns you might have.\\n\\nPlease feel free to let us know if there is any more information we can provide to help with the discussion.\\nThank you again for your time and thoughtful review.\\n\\nBest regards, \\n\\nAuthors\"}",
"{\"title\": \"Official comment for Reviewer f2vj\", \"comment\": \"I'm glad to hear that the concerns have been addressed. If you have any additional concerns, please feel free to share them, and I'll be happy to address them promptly.\"}",
"{\"title\": \"Official comment for Reviewer f2vj\", \"comment\": \"As the PDF revision period period is drawing to a close and the discussion deadline is approaching, we would like to kindly remind you of the improvements we made to our paper, thanks to your valuable feedback.\\n\\n- We compared our method's **pose estimation performance** with various pose estimators. This comparison with RayRegression[1], which only learns pose, showed that jointly learning shape with pose shows a synergetic effect, improving pose estimation performance. \\n- We combined pose-dependent baselines (pixelSplat[2], MVSplat[3]) with various state-of-the-art pose estimators, enhancing the **thoroughness of the evaluation and baselines**. This experiment further supports our claim that simply combining pose estimators with pose-dependent methods leads to geometry misalignment, which in turn results in degraded view synthesis performance.\\n- We conducted additional experiments on the ACID[4] and BlendedMVS[5] datasets, demonstrating the **cross-dataset generalizability** of our method.\\n- We included results indicating that **training pose-required methods with estimated poses can lead to instability**, which highlights the importance of our proposed approach.\\n- We made corrections to certain details in the paper, which helped improve its overall presentation.\\n\\nWe sincerely appreciate your careful review, and we\\u2019re glad we could address your concerns. We hope these changes and experimental improvements will be helpful as you finalize your review.\\n\\n---\\n\\n[1] Zhang, Jason Y., et al. \\\"Cameras as Rays: Pose Estimation via Ray Diffusion.\\\", NeurIPS, 2024.\\n\\n[2] Charatan, David, et al. \\\"pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction.\\\" CVPR, 2024.\\n\\n[3] Chen, Yuedong, et al. \\\"Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images.\\\" ECCV, 2024.\\n\\n[4] Liu, Andrew, et al. 
\\\"Infinite nature: Perpetual view generation of natural scenes from a single image.\\\" ICCV. 2021.\\n\\n[5] Yao, Yao, et al. \\\"Blendedmvs: A large-scale dataset for generalized multi-view stereo networks.\\\" CVPR. 2020.\"}",
"{\"title\": \"Response to Reviewer QjrW (3/3)\", \"comment\": \"> **Can the proposed method tackle higher-resolution input images, such as the original-resolution images in DTU and Tanks and Temples?**\\n> \\n\\nOur method does not impose explicit restrictions on image resolution. However, as we adopt a pixel-aligned Gaussian prediction approach, higher-resolution images inherently result in a larger number of Gaussian primitives, which can lead to increased memory consumption. This characteristic is not unique to our method but is a common limitation of pixel-aligned approaches, including our baselines (PixelSplat, MVSplat).\\n\\nHowever, a key distinction of our method is that the number of Gaussians remains fixed regardless of the number of input views, as our method only predicts the Gaussians from the canonical view with the fused features. This suggests that, for high-resolution reconstructions requiring multiple views, our approach handles the task more efficiently with less Gaussians required compared to the baselines.\"}",
"{\"title\": \"Common Comments\", \"comment\": \"We sincerely appreciate the reviewers for their insightful comments and constructive feedback, which have significantly enhanced the clarity and depth of our work. Below, we address the comments that are highly relevant to all reviewers, while reviewer-specific feedback is addressed individually.\\n\\n---\\n\\n### Cross-Dataset Generalization\\n\\nAll reviewers note the lack of the experiments on dataset generalizability, including large-scale or cross-dataset generalization performance. To address this, we conducted cross-dataset experiments, evaluating our model trained with RealEstate10K dataset on ACID[1] dataset, and our model trained with DTU on BlendedMVS[2] dataset, following established practices in the field. \\n\\n| | | **RealEstate10K \\u2192 ACID** | | | **DTU \\u2192 BlendedMVS** | | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| Method | Pose | **PSNR\\u2191** | **SSIM\\u2191** | **LPIPS\\u2193** | **PSNR\\u2191** | **SSIM\\u2191** | **LPIPS\\u2193** |\\n| **PixelSplat** | GT | 26.84 | 0.81 | 0.18 | 11.64 | 0.20 | 0.67 |\\n| | \\u03c3 = 0.01 | 21.73 | 0.57 | 0.28 | 11.65 | 0.20 | 0.68 |\\n| **MVSplat** | GT | 28.18 | 0.84 | 0.15 | 12.04 | 0.19 | 0.56 |\\n| | \\u03c3 = 0.01 | 21.65 | 0.57 | 0.27 | 11.92 | 0.20 | 0.59 |\\n| **Ours** | - | 23.47 | 0.69 | 0.26 | 12.19 | 0.26 | 0.61 |\\n\\nAs shown in the table, our method exhibits strong generalizability, performing comparably to or even surpassing the baselines that utilize GT poses. We also compared the baseline methods with minimal gaussian noise level (sigma=0.01), where rotation and translation angular errors are far lower than the state-of-the-art pose estimators. We included the comprehensive quantitative (**Table 6**) and qualitative results (**Figure 11**) in the **Appendix A.4.**\\n\\n---\\n\\n[1] Liu, Andrew, et al. \\\"Infinite nature: Perpetual view generation of natural scenes from a single image.\\\"\\u00a0ICCV. 
2021.\\n\\n[2] Yao, Yao, et al. \\\"Blendedmvs: A large-scale dataset for generalized multi-view stereo networks.\\\"\\u00a0CVPR. 2020.\"}",
"{\"comment\": \"Dear Reviewer RF3w,\\n\\nThank you for your thoughtful consideration and detailed discussion.\\nYour feedback has been invaluable in helping us refine our contributions throughout the discussion process.\\n\\nWe believe our work makes significant advancements in pose-free 3D scene modeling by effectively leveraging multi-view information without relying on additional geometric prior.\\n\\nIf you have any further questions, please let us know. \\nWe would be happy to provide additional clarifications or results.\\n\\nBest regards,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer rDog (1/2)\", \"comment\": \"> **Insufficient baselines and experiments**\\n\\nWe appreciate the reviewer\\u2019s suggestion to include a comparison with additional baselines.\\n\\nWe have compared our work with the concurrent work Splatt3R[1], which utilizes pre-trained MASt3R[2] weights for geometry estimation. We observed that Splatt3R faces a significant scale-ambiguity issue when applied to out-of-distribution data not seen during the training of MASt3R. The estimated scale of the reconstructed point clouds often misaligns with the scale of the camera poses for novel view rendering.\\n\\nTo address this, we attempted to fine-tune Splatt3R on the target datasets (RealEstate10K and DTU) using photometric loss. However, this approach led to convergence issues, with the model output blurry reconstructions. This behavior can be attributed to Splatt3R's reliance on geometry estimation from MASt3R, which requires ground-truth dense depths to mitigate the scale-ambiguity issue. 
Unfortuantely, our target datasets present challenges in this regard: RealEstate10K lacks ground-truth depths, and DTU provides only sparse, masked depth maps, making it difficult to adapt Splatt3R directly without significant modifications.\\n\\n- Table A\\n| DTU | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---|---|-----|----|\\n| Splatt3R | 11.78 | 0.28 | 0.57 |\\n| Ours | **17.50** | **0.34** | **0.48** |\\n- Table B\\n| RealEstate10K | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|----|----|----|---|\\n| Splatt3R | 15.80 | 0.53 | 0.30 |\\n| Ours | **21.23** | **0.71** | **0.26** |\\n- Table C\\n| **ACID** | **Training Data** | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n|---|---|---|--|---|\\n| Splatt3R | ScanNet++ | 17.49 | 0.63 | **0.26** |\\n| Ours | RealEstate10K | **23.47** | **0.69** | **0.26** |\\n\\nTo provide a fair baseline, we evaluated the pre-trained Splatt3R model (trained on ScanNet++) directly on our datasets under its original training conditions. We included both in-dataset (Table A and B) and cross-dataset (Table C) generalization tests. We have included these results in the supplementary material, **Appendix A.4 (Table 7, 8, Figure 12, 13)**, with a detailed discussion of the experimental settings, evaluation metrics, and qualitative visualizations.\\n\\nTables A and B show that our method significantly outperforms both datasets by a large margin. In addition, to discuss the cross-dataset generalization quality, we tested our method (trained on RealEstate10K) and Splatt3R (trained on ScanNet) on the ACID[3] dataset. The table shows that our method shows superior performance in all metrics, underscoring the robustness and generalizability of our approach. These results validate our method\\u2019s effectiveness in pose-free multi-view reconstruction, even in challenging scenarios without ground-truth depth supervision.\\n\\n[1] Smart, Brandon, et al. 
\\\"Splatt3r: Zero-shot gaussian splatting from uncalibrated image pairs.\\\" arXiv.\\u00a02024.\\n\\n[2] Leroy, Vincent, Yohann Cabon, and J\\u00e9r\\u00f4me Revaud. \\\"Grounding image matching in 3d with mast3r.\\\"\\u00a0ECCV, 2024.\\n\\n[3] Liu, Andrew, et al. \\\"Infinite nature: Perpetual view generation of natural scenes from a single image.\\\"\\u00a0*ICCV. 2021.\\n\\n---\\n\\n> **Potentially unfair comparison with pixelSplat and MVSplat**\\n\\nWe understand the reviewer\\u2019s concerns and have included the evaluation results of the baselines with the pose inferred from various state-of-the-art pose estimators including DUSt3R [1] and MASt3R [2].\\n\\n|Method|Pose|Rot. \\u2193|Trans. \\u2193|PSNR \\u2191|SSIM \\u2191|LPIPS \\u2193|\\n|---|---|---|---|---|---|---|\\n| **PixelSplat** | GT | - | - | 20.96 | 0.65 | 0.31 |\\n| | COLMAP | 7.10 | 31.62 | 13.49 | 0.34 | 0.66 |\\n| | MASt3R | 2.40 | **3.52** | 15.69 | 0.40 | 0.50 |\\n| | DUSt3R | **1.77** | 13.66 | 15.98 | 0.42 | 0.47 |\\n| | Ours | 2.74 | 6.28 | 13.29 | 0.31 | 0.66 |\\n| **MVSplat** | GT | - | - | 21.00 | 0.69 | 0.24 |\\n| | COLMAP | 7.10 | 31.62 | 14.69 | 0.44 | 0.46 |\\n| | MASt3R | 2.40 | **3.52** | 13.31 | 0.31 | 0.58 |\\n| | DUSt3R | **1.77** | 13.66 | 13.22 | 0.32 | 0.58 |\\n| | Ours | 2.74 | 6.28 | 14.08 | 0.33 | 0.51 |\\n| **Ours** | | 2.74 | 6.28 | **19.94** | **0.63** | **0.28** |\\n\\nWhile some pose estimation methods, such as MASt3r, demonstrate higher pose estimation accuracy compared to our approach, the rendering quality using their estimated poses combined with baselines (e.g., PixelSplat, MVSplat) falls significantly short of the results achieved with our method. To ensure a fairer comparison, we have updated the pred-pose baseline in our paper (**Table 1, 2**) to utilize poses from DUSt3R[1], which generally achieve better performance on DTU and re10k datasets. 
\\n\\nWe have also included a detailed discussion on the implementation of the baselines in **Appendix A.2** of the revised manuscript. We hope this additional clarification addresses your concerns regarding the comparison with the baselines.\\n\\n\\n[1] Wang, Shuzhe, et al. \\\"Dust3r: Geometric 3d vision made easy.\\\"\\u00a0CVPR, 2024.\\n\\n[2] Leroy, Vincent, Yohann Cabon, and J\\u00e9r\\u00f4me Revaud. \\\"Grounding image matching in 3d with mast3r.\\\"\\u00a0ECCV, 2024.\"}",
"{\"title\": \"Response to Reviewer RF3w (2/3)\", \"comment\": \"> **How is the cost volume $C_i$ transformed into pose-aware cost-volume $C_i\\u2019$? Cost volumes directly calculated from different views should not be directly added. Is there any operation, such as alignment, to make them additive?**\\n> \\n\\nTo address the additivity of cost volumes $\\\\{C_i\\\\}$ from different views, we perform a refinement step through cost aggregation, which is conditioned on predicted Pl\\u00fccker rays. Specifically, we employ a transformer-based 2D U-Net augmented with cross-attention layers, where the predicted rays serve as key-value pairs and the cost volumes act as queries. This mechanism embeds pose awareness into the cost volumes by utilizing the geometric guidance provided by the rays.\\n\\n---\\n\\n> **I would also appreciate it if the authors can provide more details about the $V_f$ and Fig.4.**\\n> \\n\\nWe sincerely thank the reviewer for the suggestion regarding the unclear aspects of our method. We construct the global canonical volume $V_g$ as described in **Equation 3** of the paper. $V_g$ is used to estimate the anchor points, which represent a coarse structure downscaled by a factor of 4 relative to the original image resolution. Simultaneously, we build the feature volume $V_f$ in the same manner, but with upscaled features, to estimate the offset vectors and Gaussian parameters for fine detailed reconstruction. We hope this explanation addresses your concerns. \\n\\nAdditionally, we revised and clarified the relevant sections of the paper including **Fig.4** and **Section 4.3** to ensure a clearer understanding for all readers. Thank you for bringing this to our attention.\\n\\n---\\n\\n> **The necessity of using Plucker rays for camera poses should be further confirmed through some ablation experiments. 
For example, can we directly regress the camera poses with GT poses?**\\n> \\n\\nThe Pl\\u00fccker ray representation is a fundamental component of our pipeline, as it enables seamless integration of camera poses into the multi-view feature aggregation process. While using a 6D pose representation could lead to an alternative option, our method builds upon the established assumption from Cameras as Rays [1] that ray-based representations offer advantages for learning. Specifically, Cameras as Rays reports improved stability and accuracy when using ray-based methods, as evidenced by the comparison of \\\"R+T Regression\\\" and \\\"Ray Regression\\\" in Tables 1 and 2 of their paper.\\n\\nAdditionally, we would like to respectfully note that directly regressing 6D poses would not constitute a fair ablation of the ray-based representation, as it would involve not only altering the pose representation but also redesigning the embedding strategy.\\n\\n[1] Zhang, Jason Y., et al. \\\"Cameras as Rays: Pose Estimation via Ray Diffusion.\\\", NeurIPS, 2024.\"}",
"{\"title\": \"Response to Reviewer f2vj (1/2)\", \"comment\": \"> **The main concern is the lack of comparison with state-of-the-art (SOTA) pose estimation methods like COLMAP, DUSt3R[1], and MASt3R[2].**\\n> \\n\\n| Method | Rot. \\u2193 | Trans. \\u2193 |\\n| --- | --- | --- |\\n| COLMAP | 7.10 | 31.62 |\\n| Relpose++ | 19.56 | 44.18 |\\n| RayRegression | 3.10 | 6.57 |\\n| DUSt3R | 1.77 | 13.66 |\\n| MASt3R | 2.40 | 3.52 |\\n| **Ours** | 2.74 | 6.28 |\\n\\nRegarding pose estimation, we compare our method with COLMAP[1], Relpose++[2], and RayRegression from the Cameras as Rays[3] as well as Mast3R [4] an4d Dust3R [5], which serve as foundational 3D reconstruction models for reference. While Mast3R and Dust3R demonstrate superior pose estimation performance, they rely on ground-truth dense depth maps and are trained on large-scale datasets. In contrast, our method and the other compared approaches are trained solely on the DTU train sets. We also emphasize that our primary objective is to advance pose-free reconstruction by minimizing reliance on accurate pose information.\\n\\nOne of the key findings is that the **joint training of pose estimation and 3D Gaussians with embedded estimated poses** plays a crucial role in leveraging a multi-view geometry prior, improving quality both in rendering and pose estimation. This joint optimization process enhances the overall robustness and generalizability of our method, particularly in scenarios with limited or no ground-truth depth annotations. These results show the effectiveness of our approach in achieving accurate pose estimation as a byproduct of our pose-free rendering framework. \\nWe added the discussion in **Appendix** **A.4**. \\n\\n[1] Schonberger, Johannes L., and Jan-Michael Frahm. \\\"Structure-from-motion revisited.\\\" CVPR, 2016\\n\\n[2] Lin, Amy, et al. \\\"Relpose++: Recovering 6d poses from sparse-view observations.\\\" *3DV*, 2024.\\n\\n[3]Zhang, Jason Y., et al. 
\\\"Cameras as Rays: Pose Estimation via Ray Diffusion.\\\", NeurIPS, 2024.\\n\\n[4] Wang S, Leroy V, Cabon Y, et al. \\u201cDust3r: Geometric 3d vision made easy\\u201d CVPR, 2024\\n\\n[5]: Leroy V, Cabon Y, Revaud J. \\u201cGrounding Image Matching in 3D with MASt3R\\u201d, ECCV 2024\\n\\n---\\n\\n> **The proposed method should compare baselines like camera poses from COLMAP or DUSt3R plus MVSplat/PixelSplat. While I expect COLMAP may not perform very well given the sparse image input, DUSt3R/MASt3R is promising to give relatively accurate pose estimiation, as the paper InstantSplat[3] shows.**\\n> \\n\\n> **It's acceptable that the method is unable to outperforms pixelsplat/ MVSplat with GT pose assumption, since it's imfeasible to obtain such accurate poses using sparse image input. But as mentioned in the weekness, we need to see whether it can outperform other methods using SOTA camera pose estimation.**\\n> \\n\\nWe appreciate the reviewer\\u2019s insightful concern and fully recognize the importance of a thorough comparison. To address this, we evaluated PixelSplat and MVSplat using state-of-the-art pose estimators, DUSt3R[1] and MASt3R[2].\\n\\n| Method | Pose | Rot. \\u2193 | Trans. 
\\u2193 | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| **PixelSplat** | **GT** | - | - | 20.96 | 0.65 | 0.31 |\\n| | COLMAP | 7.10 | 31.62 | 13.49 | 0.34 | 0.66 |\\n| | MASt3R | 2.40 | **3.52** | 15.69 | 0.40 | 0.50 |\\n| | DUSt3R | **1.77** | 13.66 | 15.98 | 0.42 | 0.47 |\\n| | Ours | 2.74 | 6.28 | 13.29 | 0.31 | 0.66 |\\n| **MVSplat** | **GT** | - | - | 21.00 | 0.69 | 0.24 |\\n| | COLMAP | 7.10 | 31.62 | 14.69 | 0.44 | 0.46 |\\n| | MASt3R | 2.40 | **3.52** | 13.31 | 0.31 | 0.58 |\\n| | DUSt3R | **1.77** | 13.66 | 13.22 | 0.32 | 0.58 |\\n| | Ours | 2.74 | 6.28 | 14.08 | 0.33 | 0.51 |\\n| **Ours** | | 2.74 | 6.28 | **19.94** | **0.63** | **0.28** |\\n\\nAs shown in the table, our method outperforms these baseline combinations with state-of-the-art pose predictors. Additionally, as highlighted in **Table 1** and **Table 2** of our main paper, pose-dependent generalization methods are sensitive to even subtle pose errors (\\u03c3=0.01), which is a lower error margin than typical SOTA predictors achieve. We added the discussion in **Appendix** **A.2**. We hope this experiment addresses your concern, and we remain open to providing further clarifications if necessary. \\n\\n[1] Wang S, Leroy V, Cabon Y, et al. \\u201cDust3r: Geometric 3d vision made easy\\u201d CVPR, 2024\\n\\n[2]: Leroy V, Cabon Y, Revaud J. \\u201cGrounding Image Matching in 3D with MASt3R\\u201d, ECCV 2024\"}",
"{\"comment\": \"Thanks for your response! Most of my concerns have been adressed. I still have some questions.\\n1. I still think the 3D respresentation of this work is built upon Scaffold-GS, this representation improves the performance a lot (Table 3). Therefore, the perfomance improvement of this work may be attributed to the Scaffold-GS representation. I know this work also made some modifications, however, it is better to introduce Scaffold-GS as a preliminary first.\\n\\n2. For W3, I cannot find cross-dataset experimental results. In addition, when generalizing the two trained models to other datasets, how to choose one of them to test ACID or BlendedMVS?\\n\\n3. For the noisy poses, it is better to compare with SPARF, which uses noisy poses to train NeRF and provides an effiective way to alleviate the pose noises.\\n\\n4. For the ablation for the mean-var volume, for the ours result (PSNR: 19.09, SSIM: 0.64, LPIPS: 0.29), I cannot find correspondint resutls in the main text. What is the experimental setting for this ablation?\"}",
"{\"title\": \"Response to Reviewer QjrW (1/3)\", \"comment\": \"> **The proposed method relies on cost volume construction, which requires depth range priors.**\\n> \\n\\nWe appreciate the reviewer\\u2019s insightful comment. While having accurate depth range priors can indeed enhance reconstruction quality, our approach does not necessarily rely on strict depth range constraints. For instance, in the RealEstate10K dataset, where ground-truth depth ranges are unavailable, we employed a broad range of 1 to 100, demonstrating the robustness of our method to varying scales. This flexibility underscores the generalizability of our framework across various scenes without the need for precise prior depth range information.\\n\\n---\\n\\n> **Moreover, can you discuss the limitation of the proposed unable on tackling unbounded\\u00a0360\\u2218\\u00a0scenes?**\\n> \\n\\nWe fully recognize the growing need and demand for addressing more complex scenarios, such as 360-degree input images. Recent advancements, such as MVSplat360 [1], have made significant progress in tackling these challenges. We believe that our method can be integrated with such approaches, offering the potential to further enhance solutions for these demanding cases. We appreciate your suggestion and will include more detailed discussions on these limitations in the revised manuscript.\\n\\n[1] Chen, Yuedong, et al. \\\"MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views.\\\"\\u00a0NeurIPS, 2024\\n\\n---\\n\\n> **In fact, the anchor point idea used in this work is proposed by Scaffold-GS [1]. Can you clarify the difference between your use and Scaffold-GS? Maybe it is better to present a preliminary to introduce it as the scene representation.**\\n> \\n\\nWe sincerely thank the reviewer for bringing up Scaffold-GS [1] and appreciate the opportunity to clarify the distinctions between our approach and theirs. 
While both methods utilize anchor points, their objectives and feature characteristics differ fundamentally.\\n\\nIn Scaffold-GS, anchor points are voxelized centers derived from SfM reconstructions, designed to enhance local fidelity by constraining Gaussian primitives to localized offsets. This process relies on iterative, per-scene optimization to align with pseudo ground-truth structures, focusing on improving local accuracy for specific scenes.\\n\\nIn contrast, our method predicts pixel-aligned anchor points and their corresponding features in a feed-forward, data-driven manner, bridging 2D image information to 3D scene representation. Unlike Scaffold-GS, our approach generalizes to unseen scenes without iterative optimization and models Gaussian primitives that capture global scene structures, supporting multiple views effectively.\\n\\nThese distinctions underline the fundamental differences in anchor point usage and methodology. If there\\u2019s any additional concerns regarding this, we are glad to further discuss on this topic. \\n\\n[1] Lu, Tao, et al. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. CVPR, 2024.\\n\\n---\\n\\n> **The proposed method are trained on DTU and RealEstate10K, respectively. Then, the trained models are used to test the corresponding datasets. This cannot verify the generalizable ability of the proposed method. Can you test the proposed method on RealEstate10K with the model trained on DTU, and test the proposed method on DTU with the model trained on RealEstate10K?**\\n> \\n\\n> **For the generalizable ability, can the model trained on one dataset generalize to different datasets? For example, can the model trained on RealEstate10K be used to test Tanks and Temples datasets [2]?**\\n> \\n\\nWe have addressed this cross-dataset generalization evaluation in the shared official comment. As in the comment, we evaluated SHARE trained on RealEstate10K and tested it on ACID. 
In addition, we tested cross-dataset generalization from DTU to BlendedMVS datasets. Since the difference distribution and the number of scenes between DTU ( < 100 scenes of object-centric data with black background) and RealEstate10K (60K+ large-scale of indoor and outdoor scenes). We hope this provides clarity, and we are happy to address any further questions or concerns.\"}",
"{\"title\": \"Response to the authors\", \"comment\": \"Thanks for the detailed response and the additional experiments. Could you clarify why you mentioned that *Mast3R* and *Dust3R* rely on ground-truth dense depth maps? From my understanding, their methods are capable of obtaining camera poses solely from RGB inputs.\\nAdditionally, I understand that training *PixelSplat* or *MVSplat* with added noises may result in unstable training. However, is it possible to train them using the estimated poses from *MASt3R* or *DUSt3R*? I assume the last table you provided uses their poses only during inference, which might cause the different pose distribution from training pose distribution. My other concerns are resolved.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for your detailed responses. However, I must raise a concern regarding the experimental comparison with Splatt3R. Since MASt3R generates point clouds with an arbitrary scale factor, proper evaluation requires aligning ground truth camera poses with the scale of estimated camera poses for novel view rendering. Based on Splatt3R's reported metrics and the visualizations presented in Figures 12 and 13, it appears this camera pose alignment step has been omitted from your evaluation protocol. This oversight could significantly impact the fairness of the comparison, making the current experimental results and your analysis not convincing enough.\"}",
"{\"summary\": \"In this work, the authors predict a framework for pose-free generaliable 3DGS primitives prediction. By jointly predicting camera poses described with Plucker rays and injecting them into the Canonical Volume Construction, the framework can integrate multi-view features into geometry volume and feature volume under a single canonical view, which would be used for subsequent Gaussian primitive prediction through MLPs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The idea of introducing Plucker ray for pose representations and multi-view fusion guidance is useful;\\n 2. The Anchor-aligned Gaussian prediction by integrating cost volumes from multiple views into a single canonical view is a interesting idea;\\n 3. The evaluations on DTU and RealEstate10K datasets confirm that the proposed method can predict higher quality 3DGS primitives under the pose-free setting;\", \"weaknesses\": \"The main weaknesses of this work may include the lack of some critical details and discussions about related works. Please check the questions section for my problems about the details.\\nAs for the lack of discussions, the idea of introducing Plucker ray maps to represent camera poses has been introduced in CAT3D[1]. The authors should discuss about their differences, at least.\\nAs for the pose-free generalizable prediction of 3DGS primitives, Splat3R[2] also proposes another effective solution by estimating the camera poses through DUST3R[3]. Some related discussions and comparisons should be necessary to validate the effectiveness of this work.\\n[1] Cat3d: Create anything in 3d with multi-view diffusion models\\n[2] Splatt3r: Zero-shot gaussian splatting from uncalibarated image pairs\\n[3] Dust3r: Geometric 3d vision made easy\", \"questions\": \"1. How is the cost volume $C_i$ transformed into pose-aware cost-volume $C_i'$? Cost volumes directly calculated from different views should not be directly added. 
Is there any operation, such as alignment, to make them additive? I would also appreciate it if the authors can provide more details about the $V_f$ and Fig.4.\\n 2. The necessity of using Plucker rays for camera poses should be further confirmed through some ablation experiments. For example, can we directly regress the camera poses with GT poses?\\n 3. How is the ground truth Plucker rays acquired? Are they acquired with the original camera poses?\\n 4. How is the performances on more challenging dataset, e.g., Scannet? Besides, I am also curious about the number of Gaussian primitives acquired by this framework. As only one canonical view is used for prediction, would the number of predicted primitives less than other methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
E9NQUvbsT1 | Task and Model Agnostic Differentially Private Graph Neural Networks via Coarsening | [
"Anuj Kumar Sirohi",
"Anjali Gupta",
"Sandeep Kumar",
"Amitabha Bagchi",
"Sayan Ranu"
] | Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing graph-structured data, deriving representations by aggregating information from neighboring nodes. However, this aggregation process inherently increases the risk of exposing confidential data, as a single node may influence the inference process for multiple nodes simultaneously. To mitigate this risk, researchers have explored differentially private training methods for GNN models. Existing privacy-preserving approaches, however, face significant challenges. They often incur high computational costs during training or struggle to generalize across various GNN models and task objectives. To address these limitations, we introduce Differentially Private Graph Coarsening (DPGC), a novel method that tackles two key challenges in GNN training: scalability and privacy guarantees that are independent of the downstream task or GNN model. Through comprehensive experiments on six datasets across diverse prediction tasks, we demonstrate that DPGC sets new benchmarks in graph coarsening. Our method achieves superior compression-accuracy trade-offs while maintaining robust privacy guarantees, outperforming state-of-the-art baselines in this domain. | [
"Graph Neural Network (GNN)",
"Differential Privacy (DP)",
"Graph Coarsening"
] | Reject | https://openreview.net/pdf?id=E9NQUvbsT1 | https://openreview.net/forum?id=E9NQUvbsT1 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ufj0iA24ZS",
"u2ExJMRHRy",
"sT063TIM5k",
"hzluCyPbK3",
"hUuLaCsSxi",
"ac8mRMLKTF"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_comment",
"meta_review",
"official_review"
],
"note_created": [
1729963992489,
1730626083512,
1737524044295,
1732786928559,
1734608871214,
1729863604606
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10356/Reviewer_V4vp"
],
[
"ICLR.cc/2025/Conference/Submission10356/Reviewer_zxEg"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10356/Area_Chair_HoJc"
],
[
"ICLR.cc/2025/Conference/Submission10356/Area_Chair_HoJc"
],
[
"ICLR.cc/2025/Conference/Submission10356/Reviewer_RXHP"
]
],
"structured_content_str": [
"{\"summary\": \"The authors use a graph coarsening technique to address the high computational costs of training and the difficulty of generalizing across various GNN models and task objectives. The method achieves superior compression-accuracy trade-offs while maintaining robust privacy guarantees, outperforming state-of-the-art baselines in this domain.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The idea is very interesting.\\n2. The privacy guarantee based on the DP technique is very solid.\", \"weaknesses\": \"1. The two challenges that the authors propose to solve, scalability and privacy guarantees, are not related to each other.\\n2. DP is meant to defend against privacy attacks. However, this is not discussed in the paper. Attacks include poisoning attacks, which cannot be addressed by the technique in the paper.\", \"questions\": \"See the weaknesses listed above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes Differentially Private Graph Coarsening (DPGC), a method with strong generalizability that can be applied to all downstream GNNs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and easy to follow, with a logical flow that enhances comprehension. The experiment details are clearly demonstrated and the code is released, ensuring that the methodology is transparent and reproducible, which further reinforces the reliability of the study.\", \"weaknesses\": \"The paper claims that the proposed private GNN meets node-DP; however, node-level DP in GNNs requires the protection of node features, all links, and node labels. It appears that the proposed method does not provide protection for node labels. This results in two major concerns:\\n\\n1. The claim of satisfying node-level DP is not valid, as label protection is essential to uphold this standard.\\n2. The experiments presented in Table 3 are potentially unfair. Methods such as DP-MLP, DP-GNN, GAP, PrivGNN, and DPAR include protection for node labels, which raises concerns about the comparability of the results. The observed performance improvement might be attributed to the lack of label protection, making it unclear whether the reported gains are due to the omission of label protection.\", \"questions\": \"Please see weaknesses. If my concerns are resolved, I am willing to raise the score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"I would like to encourage the reviewers to engage with the author's replies if they have not already done so. At the very least, please\\nacknowledge that you have read the rebuttal.\"}",
"{\"metareview\": \"The authors propose a method for releasing coarsened graphs with DP guarantees, combining a diffusion step (WL-kernel) with noisy LSH-based clustering. Reviewer RXHP claims that the privacy analysis is incorrect on various levels and that \\\"the overall procedure does in fact not provide any form of edge- or node-level privacy\\\". I agree with this assessment. Reviewer zxEg also states that the \\\"claim of satisfying node-level DP is not valid\\\".\", \"additional_comments_on_reviewer_discussion\": \"The authors did not make an attempt to address the reviewers' concerns (they did not reply).\"}",
"{\"summary\": \"The submission proposes a differentially private procedure for releasing coarsened graphs, which can then be used for downstream tasks like graph neural network training.\\n\\nThe coarsening procedure, which maps a graph to a smaller graph in which each node corresponds to a set of nodes in the original graph, involves multiple steps. First, node attributes are embedded via a diffusion step with skip connections (\\\"WL-Kernel\\\"). Then, these embeddings are clustered via locality sensitive hashing (LSH), with each cluster corresponding to a node in the coarsened graph. A new adjacency is defined in the usual manner for edge contractions, i.e., two clusters are connected if any of their components are connected. Finally, a new attribute matrix is determined via gradient-based optimization of an objective that enforces similar energy to the original graph, i.e., edge-weighted squared differences between attributes.\\n\\nTo achieve edge- or node-level differential privacy, calibrated Gaussian noise is added (1) after the LSH projection function and (2) to the final attribute matrix.\\n\\nFinally, the proposed method is evaluated by comparing its privacy-utility tradeoff to differentially private GNN architectures (e.g. GAP) and graph-specific variations of DP-SGD (e.g. PrivGNN) in an inductive node classification setting. 
In addition, the effectiveness of a membership inference attack on these approaches is tested and the proposed coarsening procedure is benchmarked against prior work on graph coarsening.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The concept of releasing an entire graph with formal node-level privacy guarantees, rather than trying to develop GNN-specific procedures, is exciting and well-motivated\", \"The individual components are justified via references to earlier theoretical works on, e.g., GNN expressivity or graph spectral similarity\", \"Even ignoring the differential privacy aspect, the coarsening procedure appears to represent an improvement over prior work, see Tables 1&6 (I am not sufficiently familiar with coarsening literature to make a definite statement on this).\", \"The experimental evaluation rigorously explores the space of DP-GNN baselines, coarsening methods, and downstream GNNs.\", \"Results are reported with standard deviations.\"], \"weaknesses\": \"### Primary weakness\\nThe main weakness of the proposed work is that **the privacy analysis is incorrect on various levels**. Specifically, (1) the privacy guarantees for the proposed procedure's individual components are overly optimistic or incorrect, (2) privacy leakage of the composition of these components is underestimated and (3) **the overall procedure does in fact not provide any form of edge- or node-level privacy** (contrary to claims in the paper).\\n\\n* The privacy analysis of the noisy LSH step reuses a result from [1], whose sensitivity analysis assumes that a single row in the input matrix changes. However, due to the WL-kernel diffusion, changes to a single row in the attribute matrix can change all rows in the embedding matrix that is projected by LSH (e.g., in a complete graph). The added noise is thus too small. 
A valid analysis would require using the group privacy property with group size $N$.\\n* The privacy analysis for the final attribute matrix relies on the claim that the involved optimization problem of the form $\\\\min_{\\\\tilde{X}} f(\\\\alpha, X, \\\\tilde{X})$ had sensitivity $\\\\alpha$. It is not clear why the solution of this non-linear minimization problem should have sensitivity $\\\\alpha$.\\n* The proposed DPGC procedure does not actually solve this optimization problem in closed form (which the privacy analysis of the manuscript assumes), but uses gradient descent. Since the considered neighboring relation does not constrain how much the attribute matrix $X$ can change, each gradient can change arbitrarily, i.e., the global sensitivity is $\\\\infty$.\\n* In addition, the analysis ignores that each gradient step accesses the private attribute matrix, meaning the privacy guarantees should weaken with the number of steps. A valid analysis would require use of differentially private stochastic gradient descent, alongside composition or amplification-by-iteration analysis.\\n* Assuming the previous analysis were correct, the LSH and the attribute learning step would each be $(\\\\epsilon,\\\\delta)$-DP. This does not imply that the sequential composition of these steps is $(\\\\epsilon,\\\\delta)$-DP. One needs to apply some composition theorem, e.g., [2].\\n* **The adjacency of the coarsened graphs is given by a contraction of the original adjacency. No steps are undertaken to prevent leakage of the adjacency matrix through the contraction operation.** For instance, it is trivial for an adversary to distinguish a graph with $0$ edges and a graph with $1$ edge (both of which are considered neighboring in edge- and node-level DP).\\n\\n### Other weaknesses\\n* The discussion of prior work on differentially private GNNs focuses on methods that attempt to learn private embeddings. 
It omits works on edge-level DP that, similar to this work, focus on making the input graph/edges themselves private, e.g., [4] or LapGraph from [5]. It also omits works that use DP versions of personalized pagerank to construct a privacy-preserving adjacency matrix, e.g., [6], [7].\\n* The manuscript does not discuss how to determine labels for the coarsened graph and does not propose a procedure for ensuring privacy of the nodes' labels.\\n* Parts of the main Figure 2 are not representative of the proposed method. Specifically, the \\\"lock\\\" symbol above \\\"5. Supernode's edge assignment\\\" suggests that there was some procedure that protected adjacency information, which is not the case.\\n\\n### Minor comments\\n* It would be nice to provide a definition of node-level privacy (like the one for edge-level privacy in ll.144-146). Some works only assume that the number of nodes is constant and only the features and edges of a single node change arbitrarily, while other works assume that nodes can be entirely removed.\\n* The method appears to be limited to the inductive setting, where we have a separate training graph that is coarsened for DP training. It is unclear how this method can be applied to the more common transductive setting. That is, how we can provide predictions for a partially labelled original graph $G$ after training a model on a coarsened version $\\\\tilde{G}$ of $G$?\\n* The results for GAP are identical in the node-DP (Table 3) and edge-DP (Table 4) setting. However, since its privacy guarantees are stronger for edge-DP, the accuracies in the edge-DP setting at any given privacy budget should be higher.\\n* The GAP baseline has been superseded by ProGAP [3], which enables multiple message passing steps. One may want to include it as a baseline (I do not expect the authors to do this for the rebuttal).\\n* The chosen delta ($2 \\\\times 10^{-3}$) is quite large, considering that the considered datasets have over $10^3$ nodes. 
Usually, one would choose $\\\\delta \\\\ll 1 \\\\mathbin{/}N$.\\n\\n---\\n\\n[1] Kenthapadi et al. Privacy via the Johnson-Lindenstrauss transform. Journal of Privacy and Confidentiality. 2013. \\n[2] Kairouz et al. The Composition Theorem for Differential Privacy. ICML 2014. \\n[3] Sajadmanesh et al. ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees. WSDM 2024. \\n[4] Vu et al. Privacy-Preserving Visual Content Tagging using Graph Transformer Networks. MM 2020. \\n[5] Wu et al. LINKTELLER: Recovering Private Edges from Graph Neural Networks via Influence Analysis. 2022 IEEE Symposium on Security and Privacy (SP). \\n[6] Epasto et al. Differentially Private Graph Learning via Sensitivity-Bounded Personalized PageRank. NeurIPS 2022. \\n[7] Wei et al. Differentially Private Graph Diffusion with Applications in Personalized PageRanks. NeurIPS 2024. \\n\\n---\\n\\nGiven the listed weaknesses, specifically the lack of privacy protection, **I recommend rejection**.\\nI nevertheless think that the underlying coarsening method could be a meaningful contribution to the field of graph machine learning.\\nI would encourage the authors to either focus exclusively on coarsening without DP, or to try and eliminate the remaining sources of privacy leakage before resubmitting to another venue.\", \"questions\": [\"How are labels for the coarsened graph determined in your experiments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
E9GakjQype | AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs | [
"Anselm Paulus",
"Arman Zharmagambetov",
"Chuan Guo",
"Brandon Amos",
"Yuandong Tian"
] | While recently Large Language Models (LLMs) have achieved remarkable successes, they are vulnerable to certain `jailbreaking attacks` that lead to generation of inappropriate or harmful content. Manual red-teaming requires finding adversarial prompts that cause such jailbreaking, e.g. by appending a suffix to a given instruction, which is inefficient and time-consuming.
On the other hand, automatic adversarial prompt generation often leads to semantically meaningless attacks that can easily be detected by perplexity-based filters, may require gradient information from the TargetLLM, or do not scale well due to time-consuming discrete optimization processes over the token space. In this paper, we present a novel method that uses another LLM, called the `AdvPrompter`, to generate human-readable adversarial prompts in seconds, $\sim800\times$ faster than existing optimization-based approaches.
We train the AdvPrompter using a novel algorithm that `does not require gradients` of the TargetLLM. This process alternates between two steps: (1) generating high-quality target adversarial suffixes by optimizing the AdvPrompter predictions, and (2) fine-tuning of the AdvPrompter with the generated adversarial suffixes. The trained AdvPrompter generates suffixes that veil the input instruction without changing its meaning, such that the TargetLLM is lured to give a harmful response. Experimental results on popular open source TargetLLMs show state-of-the-art results on the AdvBench dataset, which also transfer to closed-source black-box LLM APIs. Further, we demonstrate that by fine-tuning on a synthetic dataset generated by AdvPrompter, LLMs can be made more robust against jailbreaking attacks while maintaining performance, i.e. high MMLU scores. | [
"adversarial attacks",
"prompt optimization",
"red-teaming LLMs"
] | Reject | https://openreview.net/pdf?id=E9GakjQype | https://openreview.net/forum?id=E9GakjQype | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"t1cQEEMBoy",
"s2QaO38zOn",
"qppEzcbZLG",
"pNnDH10A6M",
"p84p9Rl8Ef",
"ogw8OGlPlO",
"jMchU0XKbC",
"OHmR78l8Ql",
"KV6isH4fYi",
"Gnysvegl0H",
"D4eovRF27c",
"8jYQSMvR3K",
"7MSxndCE9q",
"496MdKrPR1",
"3jmozvTk1W",
"3WWTszwySi",
"3RwGP7h8Vu",
"1JBXQXiYlU"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment"
],
"note_created": [
1732772000738,
1730712940642,
1732566080732,
1730742457340,
1730872939436,
1731864212370,
1737523661080,
1731865002803,
1733161983482,
1732956695102,
1732149486901,
1733065897880,
1729543151624,
1733159281946,
1731865538181,
1734813341925,
1731864541462,
1731863785029
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4771/Reviewer_qn1H"
],
[
"ICLR.cc/2025/Conference/Submission4771/Reviewer_QcAH"
],
[
"ICLR.cc/2025/Conference/Submission4771/Reviewer_8Khr"
],
[
"ICLR.cc/2025/Conference/Submission4771/Reviewer_8Khr"
],
[
"ICLR.cc/2025/Conference/Submission4771/Reviewer_eFRA"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4771/Reviewer_QcAH"
],
[
"ICLR.cc/2025/Conference/Submission4771/Reviewer_qn1H"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4771/Area_Chair_5bmu"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4771/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"I would like to thank the authors for their response. I am increasing my score.\"}",
"{\"summary\": \"This paper proposes a novel method to enhance jailbreaking attacks on safety-aligned large language models (LLMs). The proposed method involves constructing a framework that fine-tunes an LLM from a base model by encouraging it to generate human-readable adversarial suffixes for harmful requests. Extensive experimental results demonstrate that the AdvPrompter can produce low-perplexity adversarial suffixes and achieve performance comparable to two baseline methods, i.e., GCG and AutoDAN.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed method enables the fast generation of specific adversarial suffixes for individual harmful requests.\", \"The experimental results show that the proposed method achieves great performance.\"], \"weaknesses\": [\"Overall, I think the method is sound, but there are a few concerns.\", \"The advantages of AdvPrompter mentioned in lines 108-136 should be specified under certain comparative conditions. For example, the \\\"adaptivity to input\\\" should be highlighted in the context of generating at a low time cost, as both GCG-individual and AutoDAN-individual are also adaptive to input. The \\\"fast generation\\\" compared to GCG and AutoDAN should be specified in the context of generating individual adversarial suffixes, since GCG-universal and AutoDAN-universal are ready to be used once obtained.\", \"Since both AutoDAN-universal and AdvPrompter generate human-readable adversarial suffixes quickly, it would be beneficial to discuss their performance in more detail, especially in Table 3.\", \"I think the \\\"Gradient-free TargetLLM\\\" is not a significant advantage, and it is unnecessary to emphasize the \\\"gray-box TargetLLM\\\" since it is actually a \\\"white-box\\\" model.\", \"More comparisons to existing methods, such as TAP and PAP, should be included. 
These methods also generate human-readable adversarial prompts.\"], \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"re rebuttal\", \"comment\": \"The authors failed to properly address many of my comments, especially the last one; saying \\\"reaching SOTA ASR is not our main focus\\\" cannot dodge the question raised. I will keep my score.\"}",
"{\"summary\": \"The paper introduces a new method, and potentially a new perspective, for jailbreak prompting; the main contribution is that the prompter is itself a trained model, so it can generate jailbreak prompts extremely efficiently. The authors also mention several other properties, such as human-readability or gradient-freeness, but these properties have been well discussed before.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The idea of using a pretrained model to directly generate jailbreaks as next-token prediction is interesting.\\n\\n2. The proposed method can generate jailbreaks much faster than existing methods, especially since most existing methods optimize for every sample.\", \"weaknesses\": \"1. The idea of training a generation model for jailbreaks has many natural limitations: the evaluation requires a different data split, and the model needs to be validated for generalization across different target LLMs and different benchmark datasets. The authors didn't fully address these.\\n\\n -. 1.1 For example, since the method does not require any gradients of the target LLMs, the authors need to report more results on commercial LLMs, where the strengths might be more obvious (referring to Table 2). \\n\\n -. 1.2. The evaluation is limited to AdvBench; the empirical scope is too small. Other choices include HarmBench or JAMBench. \\n\\n -. 1.3. The evaluation needs to demonstrate the power of the AdvPrompter when the model is trained on AdvBench and tested on other benchmarks such as HarmBench or JAMBench. This is very important to show the advantages of the proposed method over per-sample optimization methods. \\n\\n -. 1.4. Similarly, the authors might want to offer more detailed and comprehensive discussions on the differences between the targetLLM during training vs. during testing, although a gentle discussion has been offered in 4.2. \\n\\n2. 
The empirical scope is also fairly limited in terms of the methods being compared. Newer methods in jailbreaks, even just the ones published and presented in recent conferences (excluding arXiv ones), need to be discussed and compared. There are more methods that can deliver human-readable jailbreaks. (Although another method that can simultaneously fulfill all the properties in Table 1 might not exist). \\n\\n3. While the authors present a unique method, which might be the only one at this moment that can achieve all the properties in Table 1, the performance is unfortunately achieved through trade-offs. For example, in Table 2, the proposed method is not necessarily always the best-performing method in ASR. The perplexity is always the lowest, but comparison to newer methods might be needed, e.g., [1]. This is important because in AI security research, ASR and perplexity are probably more important factors than generation time. The authors might need to offer more convincing discussions of why the method is favored despite being lower in ASR in certain cases. \\n\\n\\n[1]. Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of LLMs\", \"questions\": \"1. Does the method require gradients during training? (i.e., does the method have to be trained with a white-box LLM?) If that's the case, then that is another point that needs to be made clear. If not, results showing how an AdvPrompter trained on highly aligned models such as GPT applies to less aligned models would be interesting.\\n\\n2. Training time and compute requirements might also need to be discussed, although this is less important. \\n\\n3. The authors might need to compare their results in Sec. 4.3 with other jailbreak defensive methods for LLMs.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"The paper is written in a technical way. Personally, I don't think the paper has ethical issues. 
However, the paper itself is about jailbreaking LLMs, a fairly sensitive topic, might benefit from an additional layer of caution.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper studies how to efficiently generate transferable and interpretable suffixes for jailbreaks. Unlike previous white-box attack methods that adopt search-based optimization, the authors propose a learning-based method, i.e. finetuning an LLM to generate adversarial prompts using annotated harmful QA data. A major benefit of this approach is inference-time efficiency. To train the LLM, the authors propose an alternated optimization: first searching for the best suffix that prompts the target LLM to answer harmful queries, and subsequently using it to finetune the Prompter LLM. Experiments are mainly conducted by comparing with some white-box attacks, and both direct-search and transfer settings are considered. The results show a major inference-efficiency boost, with mixed results in terms of ASR. The proposed method also exhibits stronger transferability to closed-source proprietary models than baselines.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Exploring the learning-based paradigm for generating adversarial prompts is novel and relevant, to the best of the reviewer\\u2019s knowledge.\\n2. Compared with the search-based paradigm, methods along the learning-based paradigm, like this one, naturally enjoy the benefit of inference-time efficiency.\\n3. Experiment results suggest that the proposed method surpasses previous methods in terms of transferability to black-box models, which is arguably a more practical scenario than white-box attacks.\", \"weaknesses\": \"1. [major] It seems that, intuitively, the solution of equation 1 depends on the targetLLM, i.e. the optimal suffix that triggers an LLM to output the target response (e.g. \\u201cSure, here are detailed xxx\\u201d) might be different. I\\u2019d imagine that this would hurt the transferability in theory. 
It does appear that the transferability of AdvPrompter is at least better than early white-box attackers, but this might be due to poor transferability of white-box attackers in the first place.\\n2. [major] Following 1, I have some doubt about the practicality of jailbreak methods that require transferability in general. I would suggest comparing with SOTA blackbox methods on Figure 2. While I acknowledge that it is debatable whether such a comparison is academically fair, the general practice usually guides us towards using whichever is most effective. But I am happy to hear the authors\\u2019 rebuttal and take it into consideration.\\n3. [minor] The learning-based paradigm, compared with search-based ones, incurs high training costs.\", \"questions\": \"1. Necessity of alternated update: Is it necessary to put the AdvPrompter in the loop of suffix generation (Figure 1 bottom right)? My understanding is that the purpose of including it is to generate top-k candidate tokens for the suffix. I am curious whether the authors have tried to use a separate LLM to do this? The upside is that the dataset used to finetune the AdvPrompter can be generated offline (without alternately updating the AdvPrompter); the downside is that the generated most-likely tokens will not be adaptive.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Rebuttal (1/2)\", \"comment\": \"Thank you for reviewing our paper and giving constructive feedback.\\n\\n**Response to W1:**\\nWe highly disagree with the criticism brought up here. We already address in the paper that the train/test split matters a lot, which is why we also report results on the HarmBench dataset in Table 3 and Table 5 which has minimal semantic overlap between data-splits. We also specifically test the generalizability of the AdvPrompter to different TargetLLMs in section 4.2.\\n\\n**Response to W1.1:**\\nThe method still requires access to output token probabilities, therefore when attacking commercial blackbox LLMs, it can only be used in a transfer setting. This scenario is explored in Section 4.2 and Fig. 2.\\n\\n**Response to W1.2:**\\nThis is incorrect, **we report results on HarmBench** in Table 3 and Table 5.\\n\\n**Response to W1.3:**\\nIn the results on HarmBench in Table 3 we aim to test exactly this transferability between datasets. HarmBench is specifically designed to have **minimal semantic overlap** between instances (between test and validation splits), therefore by training on one split (we use validation) and testing on another split we examine the transferability between datasets. And we observe that AdvPrompter preserves high transferability in this setup as well! Note that training on AdvBench and testing on HarmBench would be **worse than our setup**, because there is a significant semantic overlap between AdvBench and HarmBench.\\n\\n**Response to W1.4:**\\nWe agree that the discussion can be extended here. During training, we attack with AdvPrompterOpt the Vicuna-13b TargetLLM, exploiting the white-box nature of this model by using the output token probabilities to evaluate candidate tokens (no gradients of TargetLLM are involved). After training the AdvPrompter on the training set, we auto-regressively generate multiple responses of the AdvPrompter on the test set. 
The instructions and corresponding responses are then tested against the blackbox TargetLLM using an API. We are happy to include this extended discussion in a revised version of the manuscript.\\n\\n\\n**Response to W2:**\\nMissing comparisons to newer attacks have been a shared criticism among reviewers; therefore we now additionally report comparisons to the recently published methods BEAST, TAP, and PAP in a variety of settings. In the results, summarized in the tables in the responses to reviewers qn1H and QcAH, we observe very strong performance of our method even in comparison to SOTA blackbox attacks. Note that Table 3 in the paper also reports the results for PAIR.\\n\\n**Response to W3:**\\nFirstly, in Table 2, for Mistral-7b and Vicuna-13b, the results are quite similar to previous methods, and the slightly reduced ASR in some settings can be compensated for by the lower perplexity (the trade-off is controlled by a hyperparameter). These two models are also relatively easy to jailbreak, and on the more difficult Llama2-7b we achieve a much larger ASR while having the lowest perplexity.\\nWe also observe strong results in terms of ASR and perplexity in comparison to newer blackbox attacks, see Figure 2 in the paper and the new results in the response to reviewer QcAH.\\n\\nSecondly, from an attacker perspective, ASR and perplexity are indeed the most relevant metrics. However, from the perspective of the designer of new LLMs, in a quickly shifting landscape of LLM capabilities, it is also important to quickly generate large amounts of safety fine-tuning data to account for edge cases of vulnerabilities. Our method takes a first step in exactly this direction: scalable safety-fine-tuning data generation while re-using previously invested compute.\\n\\nLastly, even though we achieve good ASR across different settings, reaching SOTA ASR is not our main focus; instead we offer a new learning-based approach that has not been explored in this context in previous work. 
Even if some newer methods exist that can outperform the token-by-token-based optimizer AdvPrompterOpt in terms of ASR, we firmly believe that our general AdvPrompterTrain learning-based paradigm is still highly relevant as it can easily be extended by improving the AdvPrompterOpt with newer mutation-based techniques.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thank you for reviewing our paper and giving constructive feedback.\\n\\n**Response to W1:**\\nThank you for pointing this out, we will clarify these conditions in the revision.\\n\\n**Response to W2:**\\nIndeed, in table 3, PAIR, AutoDAN and AdvPrompter all generate human-readable suffixes, only GCG produces high-perplexity suffixes. Note that in this setting we consider AutoDAN-individual and not AutoDAN-universal. We observe that out of the former three methods, AdvPrompter achieves the highest ASR on Mistral-7b and Llama-3.1-8b (the most challenging out of the three models), and only performs slightly worse than PAIR on Vicuna-7b. This is achieved with a significantly lower inference time than PAIR and AutoDAN-individual, which run optimization directly on the test-set instances.\\n\\n**Response to W3:**\\nYou are correct that we can only use white-box models with this method. However, we still think it is important to highlight that we do not need to take gradients through the model, which significantly reduces the memory footprint. We will also clarify this in the revision.\\n\\n**Response to W4:**\\nThank you for this suggestion, please see the table below. It reports the results on the HarmBench test set across whitebox and blackbox models. The numbers for TAP, TAP-T and PAP-top5 are taken from the HarmBench paper, according to which TAP shows SOTA results on various GPT models. 
We observe that AdvPrompter performs competitively with even the best blackbox attack methods against GPT models on the HarmBench dataset.\\n\\n| TargetLLM | TAP (ASR) | TAP-T (ASR) | PAP-top5 (ASR) | AdvPrompter (ASR@1/@10) |\\n|---------------|-----------|-------------|----------------|-------------------------|\\n| Vicuna-7B | 51.7 | 60.2 | 19.2 | 42.8/68.1 |\\n| Mistral-7B | 62.8 | 65.8 | 26.6 | 54.2/77.8 |\\n| GPT-3.5-Turbo | 38.9 | 47.6 | 17.0 | 49.4/79.1 |\\n| GPT-4-0613 | 43.3 | 55.8 | 11.6 | 29.2/58.9 |\\n\\nAgainst Vicuna-7b and Mistral-7b, we evaluate AdvPrompter in the whitebox attack setting.\\nAgainst the GPT models, we evaluate AdvPrompter in the blackbox transfer-attack setting. As the whitebox TargetLLM for training the AdvPrompter we use Llama3.1-8b in the case of GPT-3.5-Turbo, and Vicuna-7b in the case of GPT-4-0613.\\nAdditionally, in our response to reviewer qn1H, we report the results of the recently published method BEAST. Table 3 in the paper additionally reports results for PAIR.\"}",
"{\"comment\": \"Dear Reviewer qn1H,\\n\\nWe are grateful for your kind support and assistance in improving the paper.\"}",
"{\"comment\": \"We regret to read that the reviewer is not fully satisfied with our responses in the rebuttal, but we hope to address some of the reviewer\\u2019s concerns in the remaining discussion time.\\n\\n> The authors failed to properly address so many of my comments\\n\\nWe sincerely think that we have addressed the mentioned weaknesses, especially by adding in the new experimental comparisons to other attack methods (TAP, PAP, BEAST), as well as clarifying the misunderstandings regarding the HarmBench dataset. Our paper is a sound and valid scientific exploration of the ideas and offers new insights in the space of sharing information between adversarial LLM attacks. Can you please expand more concretely on any specific outstanding concerns you have on our paper?\\n\\n> saying \\\"reaching SOTA ASR is not our main focus\\\" cannot dodge the question raised\\n\\nWe do not believe that we dodged your question here. In your last mentioned weakness you requested the following:\\n\\n> comparison to newer method might be needed, e.g., [1]\\n\\n> Authors might need to offer more convincing discussions why the method is favored although being lower in ASR in certain cases.\\n\\nOur answer was a three-fold discussion on this topic. First we pointed out the additional comparisons to newer methods (TAP, PAP, BEAST) listed in the responses to reviewers QcAH and qn1H, stressing that the ASR of our method is mostly competitive with other methods, although it is true that it is not always at the very top in terms of ASR. Then we explained why generation speed can be equally important as ASR and perplexity, from the perspective of safety-finetuning. Finally we highlighted that introducing a new learning-based paradigm to adversarial attacks on LLMs is an important contribution in and of itself, even without always achieving SOTA ASR, as the established paradigm extends beyond the results presented in our paper, e.g. 
offering room for potential improvements upon the inner optimization loop that may strongly increase ASR in future extensions of this work. \\nWe believe that our answer thoroughly addresses your comment, if you still disagree could you please clarify what is missing?\"}",
"{\"title\": \"Checking in after Rebuttal\", \"comment\": \"Dear Reviewers,\\n\\nWe hope our rebuttal has addressed your concerns and enhanced the quality of our paper. Your feedback has been crucial in improving our work. If the changes meet the paper's objectives and your concerns, we hope this could be reflected in an improved score.\\n\\nPlease let us know if you have further questions or need additional information to aid your review. Thank you!\"}",
"{\"comment\": \"Thanks for the reply. I have no further concerns.\"}",
"{\"summary\": \"The paper titled \\\"AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs\\\" presents a novel approach to generating adversarial prompts that jailbreak large language models (LLMs), enabling the generation of harmful or inappropriate content. Traditional methods for adversarial prompt generation, such as manual red-teaming or optimization-based methods, can be slow, inefficient, and prone to generating semantically meaningless attacks. In contrast, the authors propose AdvPrompter, an LLM trained using a novel algorithm to rapidly generate human-readable adversarial prompts without requiring gradient information from the target LLM.\\n\\nThe core innovation of the paper lies in its alternating training method, AdvPrompterTrain, which alternates between generating adversarial suffixes and fine-tuning the AdvPrompter model. The resulting adversarial prompts are highly effective, achieving state-of-the-art results on the AdvBench and HarmBench datasets, with improved attack success rates, faster generation times, and strong transferability to black-box LLMs. Additionally, the paper demonstrates that by fine-tuning LLMs on datasets generated by AdvPrompter, models can become more robust against jailbreaking attacks while maintaining high performance on benchmarks like MMLU and MT-bench.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **Clarity**: The paper is well-structured and provides clear explanations of its methodology, backed by comprehensive experimental results and ablation studies. The use of figures and tables, such as Table 1, aids in understanding the comparative advantages of the method. The training algorithm and attack framework are concisely explained, which improves accessibility.\\n\\n2. **Significance**: AdvPrompter presents significant contributions to the area of LLM robustness and safety, offering a highly scalable solution to automatic red-teaming. 
Its gradient-free approach makes it applicable in both white-box and black-box settings, which broadens its impact for securing deployed LLM systems. The paper\\u2019s findings on fine-tuning models with adversarial data to improve safety also open up new avenues for automated adversarial training in LLMs.\", \"weaknesses\": \"1. **Lack of Comparison with BEAST**: One notable omission is the lack of detailed comparison with BEAST (introduced in \\\"Fast Adversarial Attacks on Language Models in One GPU Minute\\\"). BEAST also focuses on gradient-free attacks and is highly efficient, achieving impressive success rates within one GPU minute, making it an essential baseline. The authors of AdvPrompter reference BEAST, but they fail to provide head-to-head benchmarks, especially in terms of speed and success rates. This limits the ability to assess whether AdvPrompter's claim of \\\"fast\\\" generation holds up against a method already proven to be both rapid and effective.\\n\\n2. **Unclear Computational Efficiency**: While AdvPrompter claims faster generation of adversarial prompts compared to gradient-based methods, the paper does not include detailed benchmarks or profiling to demonstrate computational efficiency on a per-prompt basis. For instance, BEAST reports precise GPU utilization metrics and compares the attack time per prompt across different models. AdvPrompter lacks such concrete data, making its claims of speed improvements less convincing. Without these comparisons, it is unclear whether the method is truly fast or simply optimized for a limited set of tasks.\\n\\n3. **Incomplete Discussion of Attack Readability**: While AdvPrompter claims to generate human-readable adversarial prompts, there is limited qualitative analysis of the readability or coherence of these prompts. 
I would like to see more analysis on this.\", \"questions\": \"I encourage the authors to address the points raised in the weaknesses section and to conduct additional experiments where further investigation is required.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer QcAH,\\n\\nWe sincerely appreciate your valuable suggestions and thanks for helping us improve the paper! If you don't have any further concerns, we would deeply appreciate it if you could consider raising your score. If not, please let us know your further concerns, and we will continue actively responding to your comments.\\n\\nBest,\\nAuthors\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thank you for reviewing our paper and giving constructive feedback.\\n\\n**Response to W1 (Lack of comparison with BEAST):**\\nThanks for pointing out the missing relevant comparison. See the table below (results on the AdvBench test set).\\n\\n| TargetLLM | BEAST (ASR) | AdvPrompter (ASR@1/@10) |\\n|------------|-------------|-------------------------|\\n| Vicuna-7B | 96 | 35.6/85.6 |\\n| Vicuna-13B | 93 | 23.1/74.7 |\\n| Mistral-7B | 87 | 58.7/95.9 |\\n| Falcon-7B | 100 | 79.1/98.3 |\\n| Llama-2-7B | 12 | 12.5/46.1 |\\n\\nSince BEAST uses the same dataset (AdvBench) for evaluation, we took the highest reported ASR presented in the original paper. With AdvPrompter, generating 10 (or even many more) suffixes is not expensive; therefore, the main number for AdvPrompter is ASR@10. As you can see from the table, AdvPrompter shows competitive performance with BEAST. While AdvPrompter performs worse on some models, note the large improvement on the most difficult model, Llama-2-7B.\\n\\n**Response to W2 (Unclear computational efficiency):**\\nThank you for pointing this out. We believe that there might be a slight misunderstanding here: AdvPrompter, once trained, generates suffixes simply by auto-regressive generation within seconds; there is no optimization involved, so generation is orders of magnitude faster than BEAST. Of course, during training the computational cost is quite high and we have been very transparent about this throughout the paper (see e.g. sections 4.1 and B.5). The biggest computational burden of training the AdvPrompter is in the repeatedly used AdvPrompterOpt step, which by itself takes on average 2.7 minutes per prompt for <8GB LLMs on a single A100 GPU. 
This is very similar to the numbers reported in BEAST.\\n\\n**Response to W3 (Discussion of Attack Readability):**\\nOne possible way for improving qualitative analysis is to do a human-study on this matter, but we believe that this is unnecessary because:\\n- We include a variety of non-cherry-picked adversarial prompts in the paper (see Appendix E). \\n- We report the perplexity (for all white-box experiments) which is tightly correlated with human-readability. The perplexity is consistently low in all experiments we tried.\\n- Human-readability in our method is achieved by construction (not as a side-product) as we describe in section 3.3. \\n\\nWe believe that this sufficiently demonstrates our claim and is in accordance with other related work (e.g. AutoDAN, PAIR, etc.). Additionally, the replicable implementation (which will be open sourced with the paper) dumps all prompts and users can easily inspect and verify this claim.\"}",
"{\"metareview\": \"This paper introduces a novel method to enhance jailbreaking attacks on safety-aligned large language models (LLMs). The proposed approach involves developing a framework to fine-tune an LLM from a base model, encouraging it to generate human-readable adversarial suffixes for harmful requests. Extensive experimental results demonstrate that the method, named AdvPrompter, produces low-perplexity adversarial suffixes and achieves performance comparable to two baseline methods: GCG and AutoDAN.\\n\\nThe paper\\u2019s primary innovation lies in its alternating training method, AdvPrompterTrain, which alternates between generating adversarial suffixes and fine-tuning the AdvPrompter model. This process results in highly effective adversarial prompts, achieving state-of-the-art performance on the AdvBench and HarmBench datasets. AdvPrompter demonstrates improved attack success rates, faster generation times, and strong transferability to black-box LLMs. Furthermore, the paper shows that fine-tuning LLMs on datasets generated by AdvPrompter enhances their robustness against jailbreaking attacks while maintaining high performance on benchmarks like MMLU and MT-Bench. \\n\\nAdvPrompter makes notable contributions to the field of LLM robustness and safety, offering a scalable and gradient-free solution to automatic red-teaming in both white-box and black-box settings, which broadens its applicability for securing deployed LLM systems. \\n\\nHowever, there are significant shortcomings. One major omission is the lack of a detailed comparison with BEAST, a method introduced in *\\\"Fast Adversarial Attacks on Language Models in One GPU Minute.\\\"* BEAST also employs a gradient-free approach, achieving high success rates with exceptional speed. Although the authors reference BEAST, they fail to provide head-to-head benchmarks, particularly regarding speed and success rates. 
This omission makes it difficult to verify AdvPrompter's claims of \\\"fast\\\" generation relative to an already proven rapid method. \\n\\nAdditionally, the empirical scope of the comparisons is limited. The authors do not sufficiently engage with newer jailbreak methods, including those presented at recent conferences. Many of these methods deliver human-readable jailbreaks and warrant a more thorough discussion and comparison. Reviewers also suggested comparing AdvPrompter's results with existing defensive techniques for LLMs, a point that remains underexplored. \\n\\nWhile the authors addressed some concerns raised during the review process, several comments remain unresolved. The paper would benefit from an additional round of revision, including more comprehensive evaluations, detailed comparisons, and expanded discussions of related methods.\", \"additional_comments_on_reviewer_discussion\": \"The authors of AdvPrompter reference BEAST but fail to provide direct, head-to-head comparisons, particularly in terms of speed and success rates. This lack of comparison limits the ability to evaluate whether AdvPrompter's claim of \\\"fast\\\" generation is valid when compared to a method that has already demonstrated both speed and effectiveness. The empirical scope of the comparisons is also quite narrow, as newer methods in jailbreak attacks\\u2014specifically those presented at recent conferences (excluding arXiv papers)\\u2014are not discussed or compared. Many of these newer methods also produce human-readable jailbreaks and warrant inclusion in a broader comparison. Additionally, as suggested by the reviewer, a comparison of AdvPrompter's results with existing defensive methods for LLMs is needed.\\n\\nIn response, the authors agree that the discussion could be expanded. 
They explain that during training, AdvPrompterOpt attacks the Vicuna-13b TargetLLM, exploiting the model\\u2019s white-box nature by using output token probabilities to evaluate candidate tokens, without involving gradients from the TargetLLM. After training the AdvPrompter on the training set, they generate multiple responses from it on the test set. The instructions and responses are then tested against a black-box TargetLLM via an API. The authors are open to including this extended discussion in a revised version of the manuscript.\\n\\nDespite presenting a unique method that might be the only one currently achieving all the properties in Table 1, the performance is clearly achieved at the cost of certain trade-offs. The authors have not sufficiently addressed many reviewer comments, particularly the one regarding their claim that \\\"reaching SOTA ASR is not our main focus.\\\" This response does not adequately address the concerns raised, leading reviewers to maintain their rating.\"}",
"{\"title\": \"Rebuttal (2/2)\", \"comment\": \"**Response to Q1:**\\nThe method does not require gradients, but still requires the output token probabilities that are not always available for black-box LLMs, as described in lines 133-136.\\n\\n**Response to Q2:**\\nTraining time and compute requirements are already discussed in Appendix C.1, Table 3 and in Section 4. Using the hyperparameters specified in Appendix C, the AdvPrompterTrain process averages 16 hours and 12 minutes for 7B TargetLLMs, and 20 hours and 4 minutes for 13B TargetLLMs, when run on 2 NVIDIA A100 GPUs for training 10 epochs.\\n\\n**Response to Q3:**\\nWhile this would indeed be a nice addition, our goal there was to show an initial validation that AdvPrompter could be useful for improving LLM robustness. This was indeed the case when we tested it across different attack methods (e.g. ours, AutoDAN, GCG). We believe that a more comprehensive evaluation of different alignment methods (e.g. DPO, SFT) and comparison with other existing defense mechanisms is beyond the scope of this paper.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thank you for reviewing our paper and giving constructive feedback.\\n\\n**Response to W1:**\\nIt is true that one could expect that the adversarial suffix is highly specific to the TargetLLM. However, in practice we observe that attacks are transferable to a surprisingly high degree. One hypothesis is that while the specific architecture and training procedures for various LLMs might differ, the data used for training (and specifically safety fine-tuning) is often very similar, which leads to shared vulnerabilities.\\nAdditionally, we believe that there is a distinction to be made between high- and low-perplexity adversarial suffixes. In the former case the suffix heavily exploits out-of-distribution non-human-readable text to achieve the jailbreaking behavior, in the latter the jailbreaking behavior is achieved with higher-level strategies, e.g. by convincing the TargetLLM of the non-harmfulness of the instruction using natural language. Naturally the latter approach appears much more independent of the choice of attacked TargetLLM, and this is potentially why we observe an improved transferability for our attack.\\n\\n**Response to W2:**\\nFirst, we believe that transfer-based jailbreak attacks are not automatically impractical. For example, note that jailbreak attacks are also useful for automated red-teaming, so transferability is not a necessity for an attack to be useful in practice. \\nMoreover, in principle, human-written jailbreaking prompts transfer very well between models because they depend on high-level strategies that generalize across different LLMs. While we don\\u2019t claim that AdvPrompter has achieved human-level performance, we have seen some examples of high-level strategies emerging (e.g. Shakespearean, virtualizing it in a game environment, etc.) so it is reasonable that AdvPrompter transfers well. 
\\nTo quantitatively show the competitiveness of the transfer-attack with direct blackbox attacks, we additionally report a comparison against the blackbox methods TAP and PAP, please see our response to reviewer QcAH. According to the HarmBench paper, TAP shows SOTA performance on black-box attacks against some GPT models (see Table 7 therein). In our results, we observe that AdvPrompter performs very well across settings, even outperforming the blackbox methods on both tested GPT models.\\n\\n**Response to W3:**\\nThis is true and it is a natural drawback of the learning-based paradigm, which trades training cost for reduced inference time and potentially improved ASR by using shared information between instructions. We believe that this is a reasonable trade-off to be made.\\nSomething that we have not highlighted much in the paper but might be relevant here is that our proposed method actually allows for high flexibility in this trade-off, allowing for three possible settings: \\n1) Train the AdvPrompter as described in the paper and use fast auto-regressive generation at inference (this is our focus in the paper).\\n2) Completely ignore training and apply AdvPrompterOpt as a pure search-based method, reducing to a beam-search based variant of AutoDAN, similar to the recently proposed BEAST method.\\n3) Combine both, pre-training on the available data and then running AdvPrompterOpt on top at inference time.\\n\\nThis flexibility allows the user to customize the method to a variety of setups.\\n\\n**Response to Q1:**\\nIf I understand correctly, what you are describing is the following two-stage approach: First create an offline dataset using the non-trained AdvPrompterOpt (meaning AdvPrompterOpt with a fixed non-trained AdvPrompter), then train the AdvPrompter on the offline dataset. This is indeed a valid option that we have also experimented with in the earlier stages of the project.\\nThis works when the attacked TargetLLM is not very safe, i.e. 
when the non-trained AdvPrompterOpt out-of-the-box generates jailbreaking suffixes. However, when attacking heavily safety-finetuned TargetLLMs (e.g. Llama2), AdvPrompterOpt (and also other attacks) often does not succeed in finding successful jailbreaking suffixes for a large subset of the instructions. As a consequence, the resulting offline dataset is usually not sufficient to train a well-performing AdvPrompter.\\nThe natural extension of this approach is to iterate between dataset generation and training the AdvPrompter, where the dataset at each iteration gets progressively better as the AdvPrompter proposes better candidates for AdvPrompterOpt.\\nWe decided to directly take this one step further by continuously generating data for a replay buffer while updating the AdvPrompter on a running basis.\\n\\nOf course, in practice one should make use of any available data, which likely involves pre-training the AdvPrompter on an available offline dataset of suffixes to warm-start it. In the spirit of the analogy above, our training method could then be seen as an online algorithm that can be employed to further improve performance after pre-training on the available offline data.\"}"
]
} |
E8gYIrbP00 | Beyond correlation: The impact of human uncertainty in measuring the effectiveness of automatic evaluation and LLM-as-a-judge | [
"Aparna Elangovan",
"Lei Xu",
"Jongwoo Ko",
"Mahsa Elyasi",
"Ling Liu",
"Sravan Babu Bodapati",
"Dan Roth"
] | The effectiveness of automatic evaluation of generative models is typically measured by comparing the labels generated via automation with human labels using correlation metrics.
However, metrics like Krippendorff's $\alpha$ and Randolph's $\kappa$ were originally designed to measure the reliability of human labeling and thus make assumptions about typical human labeling behavior; these assumptions may not be applicable to machine-generated labels.
In this paper, we show how *relying on a single aggregate correlation score* can obscure fundamental differences between human labels and those from automatic evaluation, including LLM-as-a-Judge.
Specifically, we demonstrate that when the proportion of samples with variation or uncertainty in human assigned labels is relatively high, machine labels (generated by automatic evaluation methods) may superficially appear to have similar or better correlation with the human majority label compared to the human-to-human (HH) correlation.
This can create the illusion that labels from automatic evaluation approximates the human majority label.
However, as the proportion of samples with consistent human labels increases, the correlation between machine and human labels falls well below the HH correlation.
Based on these findings, we first propose *stratifying data by human label uncertainty* to provide a more robust analysis of automatic evaluation performance. Second, recognizing that uncertainty and variation are inherent in perception-based human evaluations, such as those involving attitudes or preferences, we introduce a new metric -*binned Jensen-Shannon Divergence for perception* for such scenarios to better measure the effectiveness of automatic evaluations. Third, we present visualization techniques -- *perception charts*, to contextualize correlation measures appropriately and to show the strengths and limitations of automatic evaluation. We have open-sourced our analysis and visualization tools at https://github.com/amazon-science/BeyondCorrelation. | [
"Automated evaluation",
"LLM as a judge",
"correlation measures"
] | Accept (Poster) | https://openreview.net/pdf?id=E8gYIrbP00 | https://openreview.net/forum?id=E8gYIrbP00 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yxJczvaoDU",
"wv3rY5H2ND",
"uzgV0gH9Ao",
"u31ztpJHly",
"sOGhcwkiGw",
"qjAuKgVL8R",
"oijoi4xWr5",
"oCb2OPifRn",
"njuHyIueK4",
"n9lSc4UFu2",
"mGsDL65VtI",
"lcpTqJQzFA",
"kuYEGaWhF4",
"kjbSMnVcm8",
"jiDnugiIRx",
"jcPgQMGzZJ",
"gVHhVw998y",
"drbt0hmCtp",
"bYqAamsFgj",
"bRnCbAoDzG",
"VW9O8PYAAj",
"VD3bgdBGd0",
"SihE60NIQC",
"N7hdIIEXDk",
"HXV5JA0pS4",
"FpdLBavuLZ",
"Fo6QdOYuSm",
"Da30Cjn2s0",
"AgjiCV2vZ7",
"AayZw0xTCS",
"6x6QU4bhZT",
"5eT53UagK7",
"32dVDHW4Kk",
"14DrS4vIs5"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732295213409,
1732166634394,
1732166183371,
1732287038552,
1732567232136,
1730580575413,
1732634293590,
1732296579390,
1732287016404,
1732611050726,
1732045570508,
1732027374867,
1733196059771,
1732285037630,
1732295469750,
1732296870192,
1732123605838,
1732285400199,
1732295885601,
1732118716684,
1734686072607,
1729978617694,
1732319307526,
1732554309403,
1737523424985,
1732118245442,
1732285541915,
1732165781507,
1732166292152,
1730676609934,
1732028619411,
1732734735403,
1730684885015,
1732685016383
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_b4vj"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_RFAo"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_Y99p"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_Y99p"
],
[
"ICLR.cc/2025/Conference/Submission956/Area_Chair_5hZH"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_RFAo"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_ApGo"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_ApGo"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_ApGo"
],
[
"ICLR.cc/2025/Conference/Submission956/Reviewer_Y99p"
],
[
"ICLR.cc/2025/Conference/Submission956/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Reviewer 4 - W1\", \"comment\": \"1. The choice to stratify results based on \\u201chigh\\u201d and \\u201clow\\u201d human uncertainty needs clearer justification. A discussion or empirical test on how these thresholds were set would make the stratification process more robust and reproducible.\\n\\nThank you for your comments, which help improve the clarity of the paper. We had used \\u201chigh\\u201d and \\u201clow\\u201d certainty in the context of the random labeller as a shorthand to indicate relatively high proportions of samples with uncertainty, as we have figures that were plotted against varying proportions of samples with uncertain labels.\\n\\nIn hindsight, we now understand the source of confusion. We have updated the content under Figure 2 as follows:\\n*Simulating the impact of uncertainty by comparing with an automatic random labeler **(R)**. When the proportion of samples with uncertainty is higher, even a random labeler can \\\\textit{appear} to have better correlation with a majority ($\\\\text{H}^w$) or median ($\\\\overline{H}$) human label\\u201d*\\n\\nWe also updated the content under RQ1 to clarify \\u201chigh\\u201d and \\u201clow\\u201d better (referring to the figures, where the random labeller appears better until a cut-off is reached) - *In this scenario, even a random labeler can appear better when the proportion of samples with uncertainty is higher. It is only when the proportion of samples with consistent labels increases that the relatively poor performance of the random labeler comes to light, as shown in Fig. 2. Intuitively, if 2 humans disagree, a 3rd random labeler cannot do any worse in terms of agreement. The random labeler can only disagree (in which case they are no better than the 2 humans) or agree with any chosen label. 
\\\\textbf{This example further illustrates why stratification by proportion of uncertainty is crucial to uncovering weaknesses in \\\\textit{any automatic labeler}}* \\n\\nIn terms of deciding the stratification thresholds for the datasets, here are the details.\\n\\nIn Table 1, the MNLI and SNLI datasets have exactly 5 human labels per item. Hence, you have the following scenarios \\u2013 5/5 = 1.0, 4/5 = 0.8, 3/5 = 0.6. Hence, we have stratified by all possible votes for the majority label. In Table 2, for SummEval, there are exactly 3 human labels. Hence, you have 3/3 = 1.0, 2/3 = 0.67, 1/3 = 0.33, for the median label. In Table 2, we only showed the maximum and minimum agreement for brevity; we have now included the full stratification in Appendix Table 5 and explained the stratification procedure. The findings remain the same.\\n\\nFor Table 3, each item has 3-5 human labels (a varying number per item), where the majority of items have closer to 3 labels. The main challenge with the MTBench dataset displayed in Table 3 is that the sample size is quite small for each model pair with multiple annotations; hence some partitions have fewer than 5 samples, or even none, unlike the MNLI or SNLI datasets, which have a few thousand samples. Hence, we have reported the partitions with a reasonable number of samples, i.e., 100\\\\% and 60-80\\\\%. We have now included all partitions, even those with a single sample, in Appendix Table 6. Our findings hold across all partitions in Table 6 as well.\"}",
"{\"title\": \"Response to reviewer 2 - Q 9\", \"comment\": \"9. The bins are used to mimic human perception. Beyond the aggregation of perception, can they capture variation in human perception?\\nYes, bins capture variation in human perception. Reviewer 1 has also asked a very similar question. \\n\\nWithin each bin, the JSD captures the difference between the distributions of human and machine labels. For example, if the human label distribution spreads over multiple labels in a bin (high variation), while the machine label distribution concentrates on one label, then the JSD metric would capture that. Here is an example:\\n\\n| Item Id | humans | human_median (Bin) | model |\\n|---:|:----------|---------------:|-------------:|\\n| A | [2, 2, 3] | 2 | 3,2 |\\n| B | [1, 2, 2] | 2 | 1,1 |\\n| C | [2, 3, 3] | 3 | 2,2 |\\n\\n    Values compared in Bin 2 = H(item A + item B), M(item A + item B)\\n    = H([2, 2, 3] + [1,2,2]), M([3,2] + [1,1])\\n    = H([2,2,3,1,2,2]), M([3,2,1,1])\\n\\nComparing the probability distributions of values between H and M for Bin 2, assuming a Likert scale of 1-3, where each index position represents a Likert value and the entry at that index is the probability of that value:\\n\\n    Bin 2 JSD(H, M) = JSD(H[1/6, 4/6, 1/6], M[2/4, 1/4, 1/4])\\n\\nTranslating this into a Python library call, shown below, yields the score $JSD_{b2}$ = 0.31:\\n\\n    from scipy.spatial import distance\\n    distance.jensenshannon([1/6, 4/6, 1/6], [2/4, 1/4, 1/4])\\n\\nSimilarly, for Bin 3:\\n\\n    Values compared in Bin 3 = H(item C), M(item C)\\n    = H([2,3,3]), M([2, 2])\\n\\n    Bin 3 JSD(H, M) = JSD(H[0, 1/3, 2/3], M[0, 2/2, 0])\\n\\nThis results in $JSD_{b3}$ = 0.56.\\n\\nThe total binned JSD is the sum of per-bin JSD scores, weighted by the fraction of samples in each bin, where $Bin_2$ contains 2 samples (A and B) and $Bin_3$ contains 1 sample (C):\\n\\nBinned JSD = 2/3 * $JSD_{b2}$ + 1/3 * $JSD_{b3}$ = 2/3 * 0.31 + 1/3 * 0.56 = 0.39\\n\\nWe have now included these examples in the 
Appendix A.7 along with the full code.\"}",
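For completeness, the arithmetic in the worked example above can be reproduced end-to-end. The sketch below is our own condensed illustration (the helper `likert_hist` and the `bins` structure are assumptions for this snippet, not the paper's code); it uses the same `scipy` call as the example. Note that `scipy.spatial.distance.jensenshannon` returns the JS *distance* (the square root of the divergence, natural log by default), which is what yields 0.31 and 0.56 here.

```python
# End-to-end sketch of the binned-JSD toy example, using the same scipy call.
import numpy as np
from scipy.spatial import distance

def likert_hist(labels, scale=3):
    """Probability distribution over Likert points 1..scale."""
    counts = np.bincount(labels, minlength=scale + 1)[1:]
    return counts / counts.sum()

# Labels pooled per bin (binned by human median), with item counts per bin.
bins = {
    2: {"human": [2, 2, 3, 1, 2, 2], "model": [3, 2, 1, 1], "items": 2},  # A + B
    3: {"human": [2, 3, 3], "model": [2, 2], "items": 1},                 # C
}

per_bin = {
    b: distance.jensenshannon(likert_hist(v["human"]), likert_hist(v["model"]))
    for b, v in bins.items()
}
total_items = sum(v["items"] for v in bins.values())
binned_jsd = sum(bins[b]["items"] / total_items * jsd for b, jsd in per_bin.items())

print(round(per_bin[2], 2), round(per_bin[3], 2))  # 0.31 0.56
print(binned_jsd)
```

The unrounded weighted sum comes out at roughly 0.396; a reported 0.39 arises when the per-bin values are first rounded to 0.31 and 0.56 before combining.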
"{\"title\": \"Response to reviewer 2 - Q4-7\", \"comment\": \"4. In Table 2 the terms H^mu M^mu are not specified.\\n\\nThank you for pointing this out. we have now updated it, under Table 2 as follows Spearman's-$\\\\rho$ median $(\\\\overline{H}\\\\overline{M}$) vs. mean ($H^{\\\\mu}M^{\\\\mu}$). \\n\\n5. In Figure 3 are shown the human perception vs the machine labels binned by human median rating. The JS value is reported. The highest values of JS are for \\\\bar{H}=1 and \\\\bar{H}=5. Looking at the histograms, the histograms of human and machine with \\\\bar{H}=3 (and in some measure \\\\bar{H}=4) are very similar, but the JS values are sensibly lower. The authors state that humans tend to be more certain when they assign extreme rating and the machine rarely provide extreme values. Could the explanation of the experiment take into account also this aspect? If there is a different interpretation of this discrepancy ( similar histograms but lower JS value) it would be useful to make this point more evident. \\n\\n \\n\\nBinned-JSD, is based on JSD and therefore is a divergence metric (like distance). Hence, lower scores indicate smaller distance implying more similar distributions. So lower is better, when comparing humans and machines. We have also mentioned this in the paper, under equation 2 in RQ2 - \\u201cSince JSD is a distance-based measure, lower scores are better because they indicate that the human and machine judgments are similar.\\u201d \\n\\n\\n6. Could the author provide some detail in the selection of different thresholds across the tables? It would be useful to clarify whether these thresholds are dataset-specific or if there's a general principle behind the thresholds selection. \\n\\nYes- Thank you for your suggestion to improve the clarity of the paper. We have now addressed this as part of your question 2\\n\\n7. 
In the paper, different metrics are used and the human and machine labels are compared with average, with median or with majority labels. Are all these comparisons needed? Do they capture multiple aspects of the outputs? \\n\\nWe have used the majority label for categorical values where there is no natural order (such as preferring model A vs. model B, or fact checking). Median values are typically applied to ordinal values (statistically speaking), although many research papers have been using the mean to aggregate Likert values; we have detailed this in section RQ2 - *\\u201cFurthermore, whether to treat Likert-data as ordinal or interval data dictates the aggregation method \\u2013 median or mean, is also debated (Joshi et al., 2015). The argument against using Likert-data as interval data is that the points on the scale may not be equidistant, e.g., the distance between pair \\u27e8neutral, agree\\u27e9 may not be the same as the distance between \\u27e8agree, strongly agree\\u27e9. We report Spearman\\u2019s-\\u03c1 for both to illustrate the difference in Table 2.\\u201d*\"}",
"{\"title\": \"Response to reviewer 3 -6\", \"comment\": \"6 Confidence intervals and statistical significance:\\n\\nWe absolutely agree this is a critical component that is missing on most studies, including ours. Some of the challenges with estimating variance in correlation problems is briefly described in Deutsch et al., 2021. The key problem with statistical significance is how \\\"random chance agreement\\\" is computed. In our paper, we have explicitly called this out in section 4.2 of our paper as follows *\\\"In addition, statistical analysis, such as null-hypothesis and significance testing, is essential for determining whether one model outperforms another by random chance. Here, the chance component includes 2 aspects, **{(1)}** chance due to the nature of samples in the evaluation set **{(2)}** uncertainty in human labels. A third aspect, even harder, is estimating the error rate as a result of systematically unpredictable erroneous labels from any automated evaluator. \\nFuture studies should explore these problems, including approaches like resampling (Deutsch et al., 2021). Incorporation of chance in rank correlation is also an important aspect to account for when two models differ in rank, but the corresponding difference in absolute scores is negligible, then the difference in the rank may not be meaningful.*\\n\\nWe hope to explore this in future work.\"}",
"{\"title\": \"Reviewer 2\", \"comment\": \"Thanks for responding to our comment. The scores does not seem to have been updated, is there any specific items you would like us to elaborate on. Thank you for taking the time to review our paper and help improve it :-)\"}",
"{\"summary\": \"This paper explores how current methods for evaluating generative models often fall short by relying too heavily on correlation metrics like Krippendorff\\u2019s \\u03b1 and Randolph\\u2019s \\u03ba. These metrics, while common, can mask important nuances in human judgment, especially in cases where human responses vary widely. The authors show that when there\\u2019s a high degree of variation in human evaluations, machine judgments might seem to align well, but as human consensus strengthens, this alignment breaks down, revealing gaps in machine understanding. To address these issues, the paper proposes a more robust evaluation framework that includes stratifying results by the level of human agreement and introducing a new metric, the binned Jensen-Shannon Divergence, to better capture perception-based evaluations. Additionally, the authors suggest using visual tools like perception charts to more clearly illustrate where machine judgments align or diverge from human benchmarks. By combining multiple metrics and visualization methods, this approach aims to provide a more accurate and comprehensive understanding of automated evaluations, especially in areas where human judgments are inherently uncertain.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper makes a valuable contribution by tackling the often-overlooked role of human uncertainty in evaluating generative models. It\\u2019s original in its approach, introducing the binned Jensen-Shannon Divergence metric to better capture the nuances of human perception and using tools like perception charts to bring new depth to evaluations. The quality of the work shows through in its thorough methodology, with experiments across multiple datasets that lend strong support to the findings. The paper is also clear, with a well-structured flow and visuals that help explain complex ideas. 
Most importantly, the paper has real significance: its framework could reshape how we evaluate generative models, especially in areas where human judgment isn\u2019t always straightforward. Finally, the paper\u2019s significance lies in its potential to reshape evaluation practices for generative models, especially in applications where human judgment is inherently subjective, such as content generation, recommendation systems, and interactive AI. By emphasizing the role of human uncertainty and offering practical tools to account for it, this work highlights a crucial aspect often ignored in model evaluation. This framework could lead to more accurate and context-sensitive evaluations, particularly for models that interact with or respond to human preferences.\nThis paper offers a well-supported framework that deepens our understanding of human-machine evaluations, bridging the gap between traditional metrics and the complexities of human perception. Its contributions could have a lasting impact, inspiring future research and improving evaluation standards across the field of generative modeling.\", \"weaknesses\": \"There is potential for improvement in the paper, such as expanding on the technical implementation of the binned Jensen-Shannon Divergence metric to make it more accessible to practitioners, potentially by providing step-by-step instructions or pseudocode. Additionally, testing the framework on a broader range of generative models beyond text (such as image or audio) would demonstrate its versatility. The perception charts are helpful, but they primarily show aggregate trends, which may obscure individual item-level discrepancies; adding item-level visualizations or error bars could improve clarity. To connect more concretely with real-world applications, the paper could benefit from case studies or examples where the framework enhances specific generative tasks, such as in recommender systems. 
Moreover, a side-by-side comparison with existing metrics like Krippendorff\\u2019s \\u03b1 would better illustrate the added value of the proposed metric. Including confidence intervals or statistical significance testing could also add rigor to the findings. Finally, considering potential biases in human label uncertainty, such as cultural or contextual differences among annotators, would make the framework more robust across diverse datasets. Together, these enhancements would increase the framework\\u2019s clarity, practical utility, and adoption potential.\", \"questions\": \"1. Could you provide more specific implementation guidance or pseudocode for this metric, perhaps in an appendix? This would help ensure reproducibility and clarity for those looking to apply it.\\n2. The paper attributes label uncertainty to genuine perceptual differences, but could the authors discuss other potential sources, such as cultural or contextual biases among annotators? How might such biases affect the evaluation results, and could additional stratification methods help account for them? This would make the framework more applicable across diverse datasets and ensure its robustness in various contexts.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer 4 RFAo\", \"comment\": \"Thank you!\"}",
"{\"title\": \"Reviewer 4 - Q5\", \"comment\": \"1. Can you explain how you created the synthetic data for high uncertainty and whether it really reflects real-world data?\\n\\n\\nWe have also now included the code in the Appendix A.11 to clarify how the random simulated dataset was created. We have now also referred to it in RQ1. Here is the code we use to generate the synthetic dataset.\\n\\n```\\nimport random\\nimport numpy as np\\nimport random\\nimport pandas as pd\\n\\ndef synthetic_random_nominal_dataset():\\n \\\"\\\"\\\"\\n Simulates a random binary dataset with 2 human labellers.\\n The 2 humans can either (a) both pick 0 (b)both pick 1 (c) one picks 0 and the other picks 1 or vice versa\\n :return:\\n \\\"\\\"\\\"\\n dataset_size = 200\\n humans_2_one_pick_0_other_picks_1 = [random.sample([0, 1], 2) for _ in range(dataset_size // 2)]\\n humans_2_both_pick_1 = [[1, 1] for _ in range(dataset_size // 4)]\\n humans_2_both_pick_0 = [[0, 0] for _ in range(dataset_size // 4)]\\n\\n human_2_annotators_binary_simulated = humans_2_one_pick_0_other_picks_1 + humans_2_both_pick_1 + humans_2_both_pick_0\\n \\n random_labeller_choice = [np.random.choice([1, 0]) for _ in range(dataset_size)]\\n \\n # Final df\\n df = pd.DataFrame(data={\\\"human_labels\\\": human_2_annotators_binary_simulated,\\n \\\"random_labeller\\\": random_labeller_choice\\n })\\n\\n return df\\n\\ndef synthetic_random_ordinal_dataset():\\n \\\"\\\"\\\"\\n Simulates a random 3 way classification 1-2-3, with 2 human labellers.\\n The 2 humans can either (a) both pick 1 (b)both pick 2. 
and so on (c) disagree\\n :return:\\n \\\"\\\"\\\"\\n dataset_size = 600\\n humans_disagree = [random.sample([1, 2, 3], 2) for _ in range(dataset_size // 2)]\\n humans_agree_1 = [[1, 1] for _ in range(dataset_size // 6)]\\n humans_agree_2 = [[2, 2] for _ in range(dataset_size // 6)]\\n humans_agree_3 = [[3, 3] for _ in range(dataset_size // 6)]\\n\\n human_2_annotators_ordinal_simulated = humans_disagree + humans_agree_1 + humans_agree_2 + humans_agree_3\\n\\n random_labeller_choice = [np.random.choice([1, 2, 3]) for _ in range(dataset_size)]\\n\\n df = pd.DataFrame(data={\\\"human_labels\\\": human_2_annotators_ordinal_simulated,\\n \\\"random_labeller\\\": random_labeller_choice\\n })\\n\\n return df\\n```\\n\\n\\nThe aim of this dataset is to demonstrate how uncertainty can impact the results when solely relying on a single number. In addition, we have 4 additional datasets (Topic chat, DICES and QAGS), used in popular LLM-as-judge papers, as a case study in the Appendix (section A.11) of our paper, referred to in the main body in section 4.3. This is in addition to the existing 6 datasets (SNLI, MNLI-matched, MNLI-mismatched, SummEval, Mt-bench and synthetic datasets) on several models \\u2013 Mistral, Sonnet, LLama and GPT-4. \\n\\nOur findings hold across both synthetic and real-world datasets -- in a stratified group with a higher proportion of noisy or uncertain samples (as measured by low HH correlation), the HM correlation seems to outperform HH correlation.\"}",
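The point these generators are built to make (that a random labeller can appear as aligned with a human as a second human is, when half the items carry disagreement) can also be checked numerically. The snippet below is our own condensed variant of the binary generator, with a larger sample size (an assumption of this sketch) so the sampled agreement sits close to its expectation:

```python
# Condensed sketch (our own, not the paper's experiment): with 50% of items
# carrying human disagreement, a purely random labeller agrees with a human
# about as often as the two humans agree with each other.
import random

random.seed(0)
n = 10_000
items = (
    [tuple(random.sample([0, 1], 2)) for _ in range(n // 2)]  # humans disagree
    + [(1, 1)] * (n // 4)                                     # both pick 1
    + [(0, 0)] * (n // 4)                                     # both pick 0
)
machine = [random.choice([0, 1]) for _ in range(n)]

hh_agreement = sum(h1 == h2 for h1, h2 in items) / n                 # exactly 0.5
hm_agreement = sum(m == h1 for (h1, _), m in zip(items, machine)) / n

print(hh_agreement)            # 0.5
print(round(hm_agreement, 2))  # close to 0.5 as well
```

Neither number separates the random labeller from a genuine second annotator, which is the RQ1 hazard: under high uncertainty, aggregate human-machine agreement alone is uninformative.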
"{\"title\": \"Response to reviewer 3 : 5\", \"comment\": \"5. Moreover, a side-by-side comparison with existing metrics like Krippendorff\\u2019s \\u03b1 would better illustrate the added value of the proposed metric\\n\\nTo address this suggestion, we created a toy example to demonstrate the comparative strengths of our JSD-based metric against existing metrics like Krippendorff\\u2019s \\u03b1. This example, detailed in Appendix A.12 . We also include the toy example below.\\n\\nConsider the example in the table below. If only used a single gold human label (human_median in this case, could also be human_majority) to compute correlation, the acceptable values that humans have chosen is lost. As a result, metrics such as Krippendorff will see treat any value that is equidistant from the human single \\u201cgold\\u201d label as acceptably similar. For instance, assume that humans choose \\u201cdisagree\\u201d or \\u201cneutral\\u201d (median/ majority value) (selections <2,3,3>). A good model chooses ''disagree\\\" and a bad model chooses \\u201cagree\\u201d (completely different to human choice), because both \\u201cdisagree\\u201d (Likert 2) and \\u201cagree\\u201d (Likert 4) are equidistant from the median/majority value (Likert 3- Median value), K-alpha has assigned very similar scores for the model that has assigned \\u201cdisagree\\u201d (Likert 2) and the model that has assigned \\u201cagree\\u201d (Likert 4). Rank correlation metrics, in addition to their misfit in comparing item level Likert scores already discussed in the paper in section RQ2, also have a similar problem and deems the model that is poor as the better model as shown below. Our proposed approach on the other hand, assigned lower (better for JSD) scores to the better model as the \\\"good\\u201d model assigns values that the humans have chosen, compared to the poor model that has predicted a different score altogether. 
\\n\\n| humans | human_median | model_good | model_poor |\\n|:----------|---------------:|-------------:|-------------:|\\n| [2, 2, 3] | 2 | 3 | 1 |\\n| [1, 2, 2] | 2 | 1 | 3 |\\n| [2, 3, 3] | 3 | 2 | 4 |\\n\\nMetric results \\n| metric | model_good | model_poor | Does better model score better |\\n|----------|--------------|--------------|----------------------------------|\\n| Tau | 0 | 0.82 | False |\\n| Rho | 0 | 0.87 | False |\\n| K-alpha | -0.06 | -0.06 | False |\\n| JS_b | 0.56 | 0.65 | True |\\n\\n\\n **Note**: lower JSD is better here, since JSD is a divergence (distance-like) metric, so a lower value indicates more similar distributions and hence a better model. It ranges between 0 (min) and 1 (max) for log to the base 2, as indicated in the explanation of equation 2. \\n\\nThe example above also exemplifies how, unless we know a priori which model is better, it is difficult to identify the advantages/shortcomings of correlation measurements, including the proposed binned-JSD. The effectiveness of a metric depends on the data. We don't know for certain whether a model appears to be better/worse because of gaps in the metrics, creating a chicken-and-egg problem in measuring the effectiveness of a metric itself. Not knowing which metric is appropriate is a common problem when it comes to correlation metrics [Hove 2018], including problems with Cohen\\u2019s Kappa [see Krippendorff 2004, cited over 4000 times] despite its common use, including in many LLM-as-judge papers. \\n\\n Hence our recommendation in section 4.3, \\u201cRecommendations for reporting effectiveness of automated methods\\u201d, of stratification, visualization and multi-metric reporting, so we can interpret the strengths and gaps in the metrics. In particular, we suggest *\\u201c2. Multi-metric reporting: If there was no uncertainty, measures such as F1 would have worked. However, as a result of uncertainty, no single metric can capture important insights about every type of data as demonstrated in Sections 3.1, 3.2 and 3.3. 
Thus, we recommend reporting on multiple metrics belonging to different families, such as chance and non-chance-adjusted measures, so each metric in its own way can assist in bringing the less obvious\\u201d.* \\n\\n\\n\\nIt is important to note that our goal is not to claim superiority over existing metrics but to offer a holistic perspective and a set of tools to analyze discrepancies between human and machine labels comprehensively.\"}",
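The Tau and Rho rows of the metric table above can be reproduced directly with `scipy`; the K-alpha row would need the third-party `krippendorff` package, so it is left out of this sketch.

```python
# Reproducing the rank-correlation rows of the toy example with scipy.
from scipy.stats import kendalltau, spearmanr

human_median = [2, 2, 3]
model_good = [3, 1, 2]  # picks only values the humans actually chose per item
model_poor = [1, 3, 4]  # picks values no human chose

tau_good, _ = kendalltau(human_median, model_good)
tau_poor, _ = kendalltau(human_median, model_poor)
rho_good, _ = spearmanr(human_median, model_good)
rho_poor, _ = spearmanr(human_median, model_poor)

# The poor model scores *higher* on both rank correlations.
print(round(tau_good, 2), round(tau_poor, 2))  # 0.0 0.82
print(round(rho_good, 2), round(rho_poor, 2))  # 0.0 0.87
```

This is the inversion the table reports: the model that mimics human choices gets zero rank correlation, while the model that picks unseen values looks strongly correlated with the median.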
"{\"title\": \"Comments after rebuttal\", \"comment\": \"Thank you for your answers and clarifications. I have updated my scores reflecting the current information.\"}",
"{\"title\": \"Further clarification\", \"comment\": \"Thanks a lot for your extensive reply, adding more datasets, and clarifying the metric details. I have two more clarifications (hopefully the latest):\\n\\nIn your second table, the column titled \\\"Does better model score better\\\" shows that it is True for JS_b. But its value is lower for good_model. Should this also be False? Did I miss something in your reply? \\n\\nI will be happy if you also clarify the last example. It seems to me that bin 2 has no correspondence in the table. \\n\\nThese toy examples are good for illustrating the metric and the problems occurring in the literature. It would be super good to include them in the paper. \\n\\nI am mostly happy with your reply and the extensive effort you made. I will most probably increase my ratings. \\n\\nThanks!\"}",
"{\"title\": \"Response to reviewer 1 - question 1\", \"comment\": \"1. It is not clear to me how this new metric handles the issues raised with traditional metrics. Could the author clarify and show cases of how JSD improves the analysis of LLMs as judges when compared to human judgment? It seems like a promising direction, but I am not convinced due to the limited number of datasets and support of the authors' claim.\\n\\nThank you for your comments to improve the clarity of the paper. The use of binned-JSD solves a specific problem, where a single majority label is not sufficient to represent the human preference as mentioned in section 4.2. \\n\\nThe advantage of binned-JSD can be demonstrated using a toy example (also now added to the appendix A.12 and referred in the paper in section RQ2) as follows. if only used a single gold human label (human_median in this case, could also be human_majority) to compute correlation, the acceptable values that humans have chosen is lost. As a result, metrics such as Krippendorff will see treat any value that is equidistant from the human single \\u201cgold\\u201d label as acceptably similar. For instance, assume that humans choose \\u201cdisagree\\u201d or \\u201cneutral\\u201d (median/ majority value) (selections <2,3,3>). A good model chooses ''disagree\\\" and a bad model chooses \\u201cagree\\u201d (completely different to human choice), because both \\u201cdisagree\\u201d (Likert 2) and \\u201cagree\\u201d (Likert 4) are equidistant from the median/majority value (Likert 3- Median value), K-alpha has assigned very similar scores for the model that has assigned \\u201cdisagree\\u201d (Likert 2) and the model that has assigned \\u201cagree\\u201d (Likert 4). Rank correlation metrics, in addition to their misfit in comparing item level Likert scores already discussed in the paper in section RQ2, also have a similar problem and deems the model that is poor as the better model as shown below. 
Our proposed approach, on the other hand, assigns lower (better for JSD) scores to the better model, as the \\u201cgood\\u201d model assigns values that the humans have chosen, while the poor model predicts a different score altogether. \\n\\n| humans | human_median | model_good | model_poor |\\n|:----------|---------------:|-------------:|-------------:|\\n| [2, 2, 3] | 2 | 3 | 1 |\\n| [1, 2, 2] | 2 | 1 | 3 |\\n| [2, 3, 3] | 3 | 2 | 4 |\\n\\nMetric results \\n| metric | model_good | model_poor | Does better model score better |\\n|----------|--------------|--------------|----------------------------------|\\n| Tau | 0 | 0.82 | False |\\n| Rho | 0 | 0.87 | False |\\n| K-alpha | -0.06 | -0.06 | False |\\n| JS_b | 0.56 | 0.65 | True |\\n\\n\\n\\nThe example above also exemplifies how, unless we know a priori which model is better, it is difficult to identify the advantages/shortcomings of correlation measurements, including the proposed binned-JSD. The effectiveness of a metric depends on the data. We don't know for certain whether a model appears to be better/worse because of gaps in the metrics, creating a chicken-and-egg problem in measuring the effectiveness of a metric itself. Not knowing which metric is appropriate is a common problem when it comes to correlation metrics [Hove 2018], including problems with Cohen\\u2019s Kappa [see Krippendorff 2004, cited over 4000 times] despite its common use, including in many LLM-as-judge papers. \\n\\n Hence our recommendation in section 4.3, \\u201cRecommendations for reporting effectiveness of automated methods\\u201d, of stratification, visualization and multi-metric reporting, so we can interpret the strengths and gaps in the metrics. In particular, we suggest *\\u201c2. Multi-metric reporting: If there was no uncertainty, measures such as F1 would have worked. However, as a result of uncertainty, no single metric can capture important insights about every type of data as demonstrated in Sections 3.1, 3.2 and 3.3. 
Thus, we recommend reporting on multiple metrics belonging to different families, such as chance and non-chance-adjusted measures, so each metric in its own way can assist in bringing the less obvious\\u201d.* \\n\\nThrough this paper and the arguments we make, we would like to encourage the research community to take a deeper look at metrics, to understand the gaps between metrics versus the reality of comparing machine with human judgements as a result of uncertainty. \\n\\n \\n\\n [1] Klaus Krippendorff, Reliability in Content Analysis: Some Common Misconceptions and Recommendations, Human Communication Research, Volume 30, Issue 3, July 2004, Pages 411\\u2013433, https://doi.org/10.1111/j.1468-2958.2004.tb00738.x \\n\\n [2] Debby ten Hove, Terrence D. Jorgensen, and L. Andries van der Ark. 2018. On the usefulness of interrater reliability coefficients. In Quantitative Psychology, pages 67\\u201375, Cham. Springer International Publishing.\"}",
"{\"title\": \"Rebuttal summary\", \"comment\": \"We would like to thank the reviewers for providing us with positive reviews and valuable feedback to enhance the clarity of our paper.\\n\\nOur key objectives of this paper have been to highlight\\n\\n1. How **aggregate correlation scores can misguide researchers** into concluding that LLM-as-Judge approximates human majority where human majority-machine correlation scores can seem higher than human-human correlation. In this paper, we show over multiple datasets that this is easier to achieve when the proportion of samples with human label uncertainty is quite high. As the proportion of samples with consistent human labels increase, this assumption that LLM-as-Judge approximates human majority falls through. In addition to using benchmark datasets, we also use a synthetic dataset to show how under high uncertainty, even a random labeller can appear to approximate human majority. Hence, our main takeaway is that human label uncertainty cannot be overlooked when measuring LLM-as-judge capabilities.\\n\\n2. This finding led us to the follow-on question on **how to measure LLM-as-judge capabilities, when the task is inherently subjective** and humans are likely to vary and consistent labels is not practically possible as there is no single ground truth / gold answer. To mitigate this, we propose a binned-JSD as a step in this direction to compare human labels with machine labels without assuming that a single gold label captures human choice.\\n\\n3. A third contribution of this paper, is highlighting how all correlation metrics rely on some key assumptions about the nature of the human labellers and that no single metric is a perfect metric, including the proposed JSD. This problem exacerbates with LLMs, as their nature is unpredictable. Each metric has its own strength and weaknesses and highlights certain aspects of the underlying data. 
Hence, we **recommend that researchers stratify the results by uncertainty, perform multi-metric reporting, and visualize the results to interpret aggregate numbers appropriately**. The proposed perception charts are a step in this direction to visualize notoriously challenging correlation numbers. \\n\\nThe key themes of the feedback we received from our reviewers, now addressed, have been:\\n\\n1. **The value of the proposed binned-JSD is not clear** - We have now illustrated this using a toy example in Appendix A.12. We have also included a Python implementation of binned-JSD for reproducibility in Appendix A.7. \\n\\n2. **How practitioners can use our framework in real-world scenarios is not clear** - We have included a case study (Appendix A.11) using 4 additional datasets (to add to the existing 5 datasets included in the initial version of the paper) used in popular papers, to highlight how the conclusion that the machine is better or worse can be misleading as a result of either human uncertainty or metric unsuitability. \\n\\n3. **Implementation details not clear - the stratification threshold and how the synthetic data with the random labeller was created** - We have now included details of how we stratify by all possible numbers of votes that a majority or median label can receive (e.g. 5/5, 4/5, 3/5, ...), along with results across all partitions, in the Appendix in Table 5 and Table 6. We have included a Python implementation of how the synthetic dataset was created in Appendix A.10.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Lets us know if you have any further questions. Thank you for reviewing our paper !\"}",
"{\"title\": \"Reviewer 4 - W2\", \"comment\": \"2. jensen-Shannon Divergence as a measure of perception-based tasks is promising, but its effectiveness is not thoroughly proven.\\n\\nThank you for your question. Various reviewers have asked the same question.\\n\\nHere is our explanation. \\nThe use of binned-JSD solves a specific problem, where a single majority label is not sufficient to represent the human preference, as mentioned in section 4.2. \\n\\nConsider the example in the table below. If only used a single gold human label (human_median in this case, could also be human_majority) to compute correlation, the acceptable values that humans have chosen is lost. As a result, metrics such as Krippendorff will see treat any value that is equidistant from the human single \\u201cgold\\u201d label as acceptably similar. For instance, assume that humans choose \\u201cdisagree\\u201d or \\u201cneutral\\u201d (median/ majority value) (selections <2,3,3>). A good model chooses ''disagree\\\" and a bad model chooses \\u201cagree\\u201d (completely different to human choice), because both \\u201cdisagree\\u201d (Likert 2) and \\u201cagree\\u201d (Likert 4) are equidistant from the median/majority value (Likert 3- Median value), K-alpha has assigned very similar scores for the model that has assigned \\u201cdisagree\\u201d (Likert 2) and the model that has assigned \\u201cagree\\u201d (Likert 4). Rank correlation metrics, in addition to their misfit in comparing item level Likert scores already discussed in the paper in section RQ2, also have a similar problem and deems the model that is poor as the better model as shown below. Our proposed approach on the other hand, assigned lower (better for JSD) scores to the better model as the \\\"good\\u201d model assigns values that the humans have chosen, compared to the poor model that has predicted a different score altogether. 
\\n\\n| humans | human_median | model_good | model_poor |\\n|:----------|---------------:|-------------:|-------------:|\\n| [2, 2, 3] | 2 | 3 | 1 |\\n| [1, 2, 2] | 2 | 1 | 3 |\\n| [2, 3, 3] | 3 | 2 | 4 |\\n\\nMetric results \\n| metric | model_good | model_poor | Does better model score better |\\n|----------|--------------|--------------|----------------------------------|\\n| Tau | 0 | 0.82 | False |\\n| Rho | 0 | 0.87 | False |\\n| K-alpha | -0.06 | -0.06 | False |\\n| JS_b | 0.56 | 0.65 | True |\\n\\n\\n **Note**: lower JSD is better here, since JSD is a divergence (distance-like) metric, so a lower value indicates more similar distributions and hence a better model. It ranges between 0 (min) and 1 (max) for log to the base 2, as indicated in the explanation of equation 2. \\n\\nThe example above also exemplifies how, unless we know a priori which model is better, it is difficult to identify the advantages/shortcomings of correlation measurements, including the proposed binned-JSD. The effectiveness of a metric depends on the data. We don't know for certain whether a model appears to be better/worse because of gaps in the metrics, creating a chicken-and-egg problem in measuring the effectiveness of a metric itself. Not knowing which metric is appropriate is a common problem when it comes to correlation metrics [Hove 2018], including problems with Cohen\\u2019s Kappa [see Krippendorff 2004, cited over 4000 times] despite its common use, including in many LLM-as-judge papers. \\n\\n Hence our recommendation in section 4.3, \\u201cRecommendations for reporting effectiveness of automated methods\\u201d, of stratification, visualization and multi-metric reporting, so we can interpret the strengths and gaps in the metrics. In particular, we suggest *\\u201c2. Multi-metric reporting: If there was no uncertainty, measures such as F1 would have worked. However, as a result of uncertainty, no single metric can capture important insights about every type of data as demonstrated in Sections 3.1, 3.2 and 3.3. 
Thus, we recommend reporting on multiple metrics belonging to different families, such as chance and non-chance-adjusted measures, so each metric in its own way can assist in bringing the less obvious\\u201d.* \\n\\n\\n**In terms of the use of Wasserstein Distance**, one of the challenges is that it cannot be used for categorical values, as the distance between two categorical values is meaningless. JSD relies only on the probabilities of values and hence can be used for categorical values as well. However, this does not mean we cannot customize Wasserstein distance - it can be explored and adapted further, possibly for ordinal values, where we plug in a bespoke distance between different points (e.g., Likert values agree (4) \\u2013 strongly agree (5) vs. neutral (3) \\u2013 agree (4), as the distances between pairs of points may not be equal). As mentioned in our paper, there is no one metric that works for all. This highlights the central theme of the paper: no single metric can fully capture the complexities of comparing human labelers with LLM-based auto labelers.\"}",
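To make the ordinal point concrete, here is a small sketch of our own contrasting the two families on Likert histograms. It uses `scipy.stats.wasserstein_distance` with the default unit spacing between scale points, i.e. before any bespoke per-pair distances are plugged in; the distributions are illustrative assumptions.

```python
# Our own illustration: JSD ignores *how far apart* Likert points are,
# while an earth-mover (Wasserstein) distance over the 1..5 scale does not.
from scipy.spatial import distance
from scipy.stats import wasserstein_distance

likert_points = [1, 2, 3, 4, 5]
humans  = [0, 0, 1, 0, 0]  # everyone picks "neutral" (3)
model_a = [0, 0, 0, 1, 0]  # always "agree" (4): one scale step away
model_b = [1, 0, 0, 0, 0]  # always "strongly disagree" (1): two steps away

# JSD sees both models as equally (and maximally) different from the humans...
js_a = distance.jensenshannon(humans, model_a)
js_b = distance.jensenshannon(humans, model_b)

# ...whereas Wasserstein reflects the ordinal distance between the choices.
w_a = wasserstein_distance(likert_points, likert_points,
                           u_weights=humans, v_weights=model_a)
w_b = wasserstein_distance(likert_points, likert_points,
                           u_weights=humans, v_weights=model_b)

print(abs(js_a - js_b) < 1e-9)  # True: JSD cannot tell the two models apart
print(w_a, w_b)                 # 1.0 2.0
```

Wasserstein only works here because the Likert points have a meaningful order and spacing; for nominal categories that spacing is undefined, which is the limitation noted above.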
"{\"title\": \"Reviewer 4 - Q7\", \"comment\": \"1. How would these findings help practitioners improve real-world model evaluations?\\n\\nWe have *expanded Appendix A.11 to include a case study with four additional datasets (Topic chat, DICES and QAGS)*, This is in addition to existing 6 datasets (SNLI, MNLI-matched, MNLI-mismatched, SummEval, Mt-bench and synthetic datasets ) on several models \\u2013 Mistral, Sonnet, LLama and GPT-4 . These datasets are used in popular LLM as a judgepaper to draw conclusions of the LLM capabilities. we summaries the findings of the casestudy ( detailed numbers in the Appendix A.11) \\n\\n- **Effects of Multi-metric reporting:** On the Topical Chat (TC) dataset, for understandable criteria the aggregate number (Krippendorff-$\\\\alpha$ -0.01) of $H^wM^w$ score of -0.01 *superficially seems to imply* that HM correlation is low as shown in Table6. However, percentage agreement (score 0.97) and Randolph-$\\\\kappa$ (score 0.93) score quite highly, indicating that class imbalance has substantially lowered Krippendorff-$\\\\alpha$ pretty close to 0.0. Also note that over 96\\\\% of the samples have perfect human agreement, however the overall HH Krippendorff-$\\\\alpha$ is quite low scoring -0.01. This effect of how various chance adjusted metrics impact correlation scores is also discussed in detail in section 4.3.\\n\\n - **Impact of stratification** When we compare the overall performance (column All in Table-6 on dataset TC (criteria understandable) with dataset QAGS, Randolph-$\\\\kappa$ drops substantially by 19 points (0.93 $\\\\rightarrow$ 0.74). However, QAGS dataset has around 66\\\\% of the samples that have perfect human agreement, while TC has 96\\\\% of the sample with human agreement. 
When we compare the samples with perfect human agreement between the 2 datasets, the model performance gap reduces to just 6 points (0.93 $\\\\rightarrow$ 0.87), pointing to how comparing datasets with different proportions of uncertain samples can affect our conclusions (in this case, *incorrectly suggesting that the model performance is substantially lower on QAGS compared to TC*). With the DICES dataset (crowdsourced with over 100 annotations per item, with no perfect-agreement items), on the other hand, the model seems to struggle across all metrics and stratification groups, indicating much deeper investigation is required. The general trend \\u2013 that in a stratified group with a relatively higher proportion of noisy or uncertain samples (as measured by low HH correlation), the $H^wM^w$ correlation seems to outperform the HH correlation, as \\\\mycolorbox{LightBlue}{highlighted} in Table 8 \\u2013 also applies here, indicating how \\\\textit{models can superficially appear to approximate the human majority when the proportion of uncertain samples is relatively higher}, as discussed in Section RQ1. \\n\\n Our framework, by pinpointing the source of discrepancies, can guide practitioners toward addressing problems on the appropriate side and drawing the right conclusions, instead of inferring (sometimes erroneously) from a single aggregate that either models are better or that models are inadequate, as demonstrated in our paper.\"}",
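The Topical Chat finding above (percentage agreement of 0.97 alongside a Krippendorff-$\alpha$ near 0.0) can be reproduced in miniature. The sketch below uses our own minimal two-rater, nominal-data implementation of Krippendorff's alpha, not the paper's code, on a hypothetical imbalanced dataset: raw agreement looks near-perfect while the chance-adjusted score collapses to slightly below zero.

```python
from collections import Counter

def krippendorff_alpha_nominal(pairs):
    """Krippendorff's alpha for two raters, nominal data, no missing values."""
    # Coincidence counts: each item contributes both ordered pairs (a, b) and (b, a).
    coincidence = Counter()
    for a, b in pairs:
        coincidence[(a, b)] += 1
        coincidence[(b, a)] += 1
    n = sum(coincidence.values())  # total pairable values = 2 * number of items
    values = {v for pair in coincidence for v in pair}
    # Marginal count of each value across the coincidence matrix rows.
    n_c = {c: sum(coincidence[(c, k)] for k in values) for c in values}
    observed = sum(cnt for (c, k), cnt in coincidence.items() if c != k) / n
    expected = sum(n_c[c] * n_c[k] for c in values for k in values if c != k) / (n * (n - 1))
    return 1.0 - observed / expected

# Hypothetical imbalanced dataset: 96 items where both raters pick label 1,
# 4 items where they disagree.
pairs = [(1, 1)] * 96 + [(1, 0)] * 4
pct_agreement = sum(a == b for a, b in pairs) / len(pairs)
alpha = krippendorff_alpha_nominal(pairs)
print(pct_agreement)        # 0.96: looks like near-perfect agreement
print(round(alpha, 3))      # -0.015: chance adjustment collapses it to ~0
```

This mirrors the multi-metric point: on heavily imbalanced labels, percentage agreement and chance-adjusted measures can tell opposite stories, which is why reporting both is recommended.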
"{\"title\": \"Thank you\", \"comment\": \"Thank you so much!\"}",
"{\"title\": \"Response to reviewer 3 - 1-2\", \"comment\": \"Thank you so much for the positive feedback on our paper.\\n\\n1. Could you provide more specific implementation guidance or pseudocode for this metric, perhaps in an appendix? This would help ensure reproducibility and clarity for those looking to apply it. \\n \\n\\nThank you for the suggestion. We are also pursuing high reproducibility and would like the community to try our metrics and visualization tools. We have already updated our submission to include a **python implementation** and a toy example to demonstrate the computation process in Appendix A.7. Open-sourcing is also a work in progress. The toy example in Appendix A.7 is as follows: \\n\\n\\n| Item Id | humans | human_median (Bin) | model |\\n|---:|:----------|---------------:|-------------:|\\n| A | [2, 2, 3] | 2 | 3,2 |\\n| B | [1, 2, 2] | 2 | 1,1|\\n| C | [2, 3, 3] | 3 | 2,2 |\\n\\n\\n\\n \\n Values compared in Bin 2 = H(item A + item B), M (item A + item B) \\n = H([2, 2, 3] + [1, 2, 2]), M([3, 2] + [1, 1])\\n = H([2, 2, 3, 1, 2, 2]), M([3, 2, 1, 1])\\n\\n Comparing the probability distribution of values between H and M for Bin 2, assuming a Likert scale of 1-3, where each index represents a Likert value and the entry at that index is the probability of that value:\\n \\n Bin 2 JSD(H, M) = JSD(H[1/6, 4/6, 1/6], M[2/4, 1/4, 1/4])\\n\\n\\nTranslating this to the python library call shown below results in the score $JSD_{b2}$ = 0.31\\n\\n\\n from scipy.spatial import distance\\n distance.jensenshannon([1/6, 4/6, 1/6], [2/4, 1/4, 1/4])\\n\\nSimilarly, for Bin 3\\n\\n Values compared in Bin 3 = H(item C), M(item C) \\n = H([2, 3, 3]), M([2, 2])\\n\\n Bin 3 JSD(H, M) = JSD(H[0, 1/3, 2/3], M[0, 2/2, 0])\\n\\nThis would result in $JSD_{b3}$ = 0.56\\n\\n \\n \\n\\nTotal binned JSD is the weighted sum over bins, weighted by the number of samples in each bin, where $Bin_2$ contains 2 samples (A and B) and $Bin_3$ contains 1 sample (C):\\n \\n\\nBinned JSD = 2/3 * $JSD_{b2}$ + 1/3 * $JSD_{b3}$ = 
2/3 * 0.31 + 1/3 * 0.56 = 0.39\\n\\n\\n\\n \\n\\n2. The paper attributes label uncertainty to genuine perceptual differences, but could the authors discuss other potential sources, such as cultural or contextual biases among annotators? How might such biases affect the evaluation results, and could additional stratification methods help account for them? This would make the framework more applicable across diverse datasets and ensure its robustness in various contexts. \\n \\n\\nThank you for this insightful suggestion. We agree that cultural or contextual biases among annotators could indeed contribute to label uncertainty, in addition to perceptual differences. Unfortunately, in most datasets, detailed annotator demographic or contextual information is unavailable, limiting our ability to explicitly analyze these factors. As a result, we broadly attribute uncertainty to \\u201cperceptual differences\\u201d as a working assumption. \\n\\nHowever, we acknowledge the importance of exploring cultural and contextual biases to enhance the framework\\u2019s applicability across diverse datasets. If large-scale annotator demographic data, including labels, were available, our proposed methods could be extended for fine-grained analyses. For instance, by stratifying the data by demographic attributes or combinations such as (demographic information, human median label), we could visualize or quantify potential biases. This would allow us to assess whether machine predictions align more closely with specific cultural or demographic groups. We appreciate your suggestion and plan to explore these directions in future work.\"}",
"{\"title\": \"Reviewer 4 - Q1- Q4\", \"comment\": \"1. How do you decide what's \\\"high\\\" or \\\"low\\\" uncertainty in your stratification, and did you try other thresholds?\\n\\nWe have now addressed this in the response to weakness 1. Let us know if you have more questions. \\n\\n2. Why did you choose Jensen-Shannon Divergence for human perception, and can you show this is better than existing metrics? \\n\\nWe have now addressed this in the response to weakness 2.\\n\\n3. How did you adapt Krippendorff\\u2019s \\u03b1 and similar metrics to account for systematic machine errors, not just random ones? \\n\\n We mention the challenges in Section 4.1 - CHALLENGES IN METRICS AND INTERPRETABILITY -- *Errors made by LLMs are rarely predictable, yet they are not random; rather, they are reproducible, making them systematic errors. The unpredictable nature of LLMs makes it difficult to design an effective metric that compares them with humans, given the uncertainty associated with human labels*. \\n\\nHence, as a workaround, we propose stratification, visualization and multi-metric reporting for detecting systematic errors in machine labels, as mentioned in section 4.3\\n\\n4. Why use previous prompts without optimizing them for each model, wouldn't this affect fairness in comparisons?\\n\\nAs mentioned in Section 3 - Analysis and settings, we reuse the original G-Eval results (Liu et al., 2023b), which rates the quality of summaries on a scale of 1-5, and we assumed that the *original authors have optimized the results for GPT-4*. We also rely on the existing results on preference data on MT-Bench and GPT-4 from Zheng et al. (2023), that were presumably optimized for GPT-4. 
Our experimental results on other models further demonstrate that regardless of how (or whether) the prompts are optimized, our central theme and findings hold, as discussed in our final section - RECOMMENDATIONS FOR REPORTING EFFECTIVENESS OF AUTOMATIC METHODS\\n\\na) Stratification by uncertainty levels: As discussed in Sec. 3.1, uncertainty in human labels can obfuscate performance gaps between machines and human evaluators. Hence, we strongly recommend stratifying results by uncertainty proportions.\\n\\nb) Multi-metric reporting: If there was no uncertainty, measures such as F1 would have worked. However, as a result of uncertainty, no single metric can capture important insights about every type of data, as demonstrated in Sections 3.1, 3.2 and 3.3. Thus, we recommend reporting on multiple metrics belonging to different families, such as chance and non-chance-adjusted measures, so each metric in its own way can assist in bringing the less obvious but critical aspects about the underlying data to the forefront.\\n\\nc) Visualization of results: A single non-parametric aggregate metric can rarely capture the entirety of the underlying raw data, and hence visualization is key to understanding performance gaps, as discussed in Section 3.3. The proposed perception charts are a step towards making aggregate correlation more interpretable, as well as highlighting the strengths and gaps of the automatic labellers\"}",
"{\"comment\": \"Thank you! I have adjusted my review and recommendation!\"}",
"{\"metareview\": \"This paper addresses the challenges of using aggregate correlation scores to evaluate the performance of LLMs as judges in subjective tasks. The authors argue that high human label uncertainty can misleadingly make LLMs appear to align closely with human majority labels, even more than human-human agreement. The experiments, based on both benchmark and synthetic datasets, aim to show that as human label consistency increases, this assumption breaks down, emphasizing the importance of accounting for label uncertainty. The authors also introduce a binned JSD metric as an alternative and make additional suggestions to use multi-metric reporting and stratification.\\n\\nThe reviewers are mostly positive; only one reviewer rates the paper marginally below the acceptance threshold. Most of the criticism is around the value of the binned-JSD metric and its implementation / how a practitioner would use the methodology.\\n\\nMy recommendation is based on reviewer feedback, which is mostly positive. The one critical reviewer does not engage much beyond the review and acknowledging the rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"The authors respond to the reviews in a detailed manner and most of the reviewers seem satisfied with their response.\"}",
"{\"summary\": \"This paper examines how human uncertainty affects evaluating generative models, noting that standard metrics like Krippendorff\\u2019s \\u03b1 may misrepresent machine accuracy when human judgments vary. The authors propose three main contributions: stratifying evaluation results by human uncertainty levels, introducing binned Jensen-Shannon Divergence (JSb) to better measure alignment with human perception, and creating perception charts to visualize evaluation performance more effectively. These tools aim to provide a clearer, more accurate picture of machine evaluation performance amidst human variability.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality: This paper takes an original approach by addressing human uncertainty in generative model evaluation. Introducing stratified results by human variability and the new JSb metric for perception-based tasks adds fresh methods to handle subjectivity in evaluations. The perception charts also offer an innovative way to visualize nuanced performance differences.\", \"quality\": \"The work is methodologically sound, with comprehensive experiments across datasets like SummEval and MNLI. The use of both real and synthetic data strengthens the empirical basis, showcasing the impact of human judgment noise on evaluation reliability.\", \"clarity\": \"The paper is well-organized, clearly defining key concepts such as HH vs. HM correlation. Explanations of JSb and perception charts are straightforward, helping readers understand the new evaluation tools effectively.\", \"significance\": \"This work fills an important gap by addressing subjective variability in human evaluations. 
Its proposed methods (if widely adopted) could lead to more accurate and relevant model evaluations, especially in perception-driven tasks across AI.\", \"weaknesses\": [\"The choice to stratify results based on \\u201chigh\\u201d and \\u201clow\\u201d human uncertainty needs clearer justification. A discussion or empirical test on how these thresholds were set would make the stratification process more robust and reproducible.\", \"The introduction of Jensen-Shannon Divergence as a measure of perception-based tasks is promising, but its effectiveness is not thoroughly proven. Including a comparison with other potential metrics, such as Earth Mover\\u2019s Distance or Wasserstein Distance, would better validate the claim that JSb captures human perception more accurately.\", \"While reusing prior prompts is convenient, this may introduce biases or inconsistencies across models. Optimizing prompts specifically for each model would yield more accurate comparisons, especially given the importance of prompt sensitivity in LLM performance. Adding prompt-tuning experiments for each model could further solidify the findings.\", \"The synthetic data used to simulate high uncertainty scenarios lacks details on its generation process. More transparency on how closely this data reflects real-world scenarios, including its validation process, would help verify the relevance of the findings.\", \"The shifts in \\u2206 (difference between HH and HM correlations) across different uncertainty levels are intriguing but underexplored. An in-depth analysis of these shifts\\u2014perhaps with concrete examples of where machines diverge from human judgment\\u2014would help understand the implications of these results more clearly. 
Visual examples showing alignment and divergence between human and machine judgments could greatly enhance interpretability.\", \"The experiments rely primarily on four datasets, which, while varied, still represent a narrow slice of possible applications.\"], \"questions\": [\"How do you decide what's \\\"high\\\" or \\\"low\\\" uncertainty in your stratification, and did you try other thresholds?\", \"Why did you choose Jensen-Shannon Divergence for human perception, and can you show this is better than existing metrics?\", \"How did you adapt Krippendorff\\u2019s \\u03b1 and similar metrics to account for systematic machine errors, not just random ones?\", \"Why use previous prompts without optimizing them for each model, wouldn't this affect fairness in comparisons?\", \"Can you explain how you created the synthetic data for high uncertainty and whether it really reflects real-world data?\", \"Could you provide concrete examples where machine evaluations diverge significantly from human judgments?\", \"How would these findings help practitioners improve real-world model evaluations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer 4 - Q6\", \"comment\": \"1. Could you provide concrete examples where machine evaluations diverge significantly from human judgments?\\n\\nWe have now added concrete examples in the appendix, Table 11 from SNLI dataset, where humans have perfect agreement (5/5 votes for the majority label), but the LLM predicts a different label.\\nHere are some of those examples\", \"example_1\": [\"Premise: A brown a dog and a black dog in the edge of the ocean with a wave under them boats are on the water in the background.\", \"Hypothesis: The dogs are swimming among the boats.\", \"Human Labels: entailment; entailment; entailment; entailment; entailment\", \"Machine Labels: neutral; neutral; neutral; neutral; neutral\", \"Example 2\", \"Premise: A young child is jumping into the arms of a woman wearing a black swimming suit while\", \"in a pool.\", \"Hypothesis: Mother catching her son in a pool.\", \"Human Labels: neutral; neutral; neutral; neutral; neutral\", \"Machine Labels: entailment; entailment; entailment; entailment; entailment\"], \"example_3\": [\"Premise: Two women are embracing while holding to go packages.\", \"Hypothesis: The men are fighting outside a deli.\", \"Human Labels: contradiction; contradiction; contradiction; contradiction; contradiction\", \"Machine Labels: neutral; neutral; neutral; neutral; neutral\", \"Example 4\", \"Premise: Two young children in blue jerseys, one with the number 9 and one with the number 2 are\", \"standing on wooden steps in a bathroom and washing their hands in a sink.\", \"Hypothesis: Two kids in jackets walk to school.\", \"Human Labels: contradiction; contradiction; contradiction; contradiction; contradiction\", \"Machine Labels: neutral; neutral; neutral; neutral; neutral\", \"Example 5\", \"Premise: Three women in dress suits walk by a building.\", \"Hypothesis: Three women are traveling by foot.\", \"Human Labels: entailment; entailment; entailment; entailment; entailment\", \"Machine 
Labels: neutral; neutral; neutral; neutral; neutral\"]}",
"{\"comment\": \"Thank you for your replies and updates! I have adjusted my review.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to further clarification\", \"comment\": \"Thank you so much for the positive feedback.\\n\\n1. In your second table, the column titled \\\"Does better model score better\\\" shows that it is True for JS_b. But its value is lower for good_model. Should this also be False?\\n\\n\\nBinned-JSD, being based on JSD, is a divergence metric (like a distance), with 0 being the smallest ''distance'' and 1 being the maximum. Hence, lower scores indicate a smaller distance, implying more similar distributions. When comparing humans and machines, a lower $JS_b$ score is better, indicating more similar distributions. We have now emphasized this in the paper, under equation 2 in RQ2: \\u201cSince JSD is like a distance-based measure, lower scores are better because they indicate that the human and machine judgments are similar.\\u201d\\n\\n2. Clarify the last example. It seems to me that bin 2 has no correspondence in the table.\\n\\n\\n| Item Id | humans | human_median (Bin) | model |\\n|---:|:----------|---------------:|-------------:|\\n| A | [2, 2, 3] | 2 | 3,2 |\\n| B | [1, 2, 2] | 2 | 1,1|\\n| C | [2, 3, 3] | 3 | 2,2 |\\n\\n\\n\\n \\n Values compared in Bin 2 = H(item A + item B), M (item A + item B) \\n = H([2, 2, 3] + [1, 2, 2]), M([3, 2] + [1, 1])\\n = H([2, 2, 3, 1, 2, 2]), M([3, 2, 1, 1])\\n\\n Comparing the probability distribution of values between H and M for Bin 2, assuming a Likert scale of 1-3, where each index represents a Likert value and the entry at that index is the probability of that value:\\n \\n Bin 2 JSD(H, M) = JSD(H[1/6, 4/6, 1/6], M[2/4, 1/4, 1/4])\\n\\n\\nTranslating this to the python library call shown below results in the score $JSD_{b2}$ = 0.31\\n\\n\\n from scipy.spatial import distance\\n distance.jensenshannon([1/6, 4/6, 1/6], [2/4, 1/4, 1/4])\\n\\nSimilarly, for Bin 3\\n\\n Values compared in Bin 3 = H(item C), M(item C) \\n = H([2, 3, 3]), M([2, 2])\\n\\n Bin 3 JSD(H, M) = JSD(H[0, 1/3, 2/3], M[0, 2/2, 0])\\n\\nThis would result in $JSD_{b3}$ = 
0.56\\n\\n \\n \\n\\nTotal binned JSD is the weighted sum over bins, weighted by the number of samples in each bin, where $Bin_2$ contains 2 samples (A and B) and $Bin_3$ contains 1 sample (C):\\n \\n\\nBinned JSD = 2/3 * $JSD_{b2}$ + 1/3 * $JSD_{b3}$ = 2/3 * 0.31 + 1/3 * 0.56 = 0.39\\n\\n\\nWe have now included these examples in Appendix A.7 along with the full code.\"}",
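The worked example above can be run end to end. The function below is a minimal sketch of the binned-JSD computation as described in the example (bins keyed by the human median; per-bin JSD over pooled Likert distributions; bins weighted by their item counts), not the exact Appendix A.7 implementation; `likert_distribution` is a hypothetical helper name.

```python
import numpy as np
from scipy.spatial import distance

def likert_distribution(values, scale=3):
    """Probability of each Likert point 1..scale among the given labels."""
    counts = np.bincount(values, minlength=scale + 1)[1:]
    return counts / counts.sum()

def binned_jsd(items, scale=3):
    """items: list of (human_labels, model_labels) per item.

    Bins items by their human median, compares the pooled human vs. model
    label distributions per bin with JSD, then weights each bin's score
    by its share of items."""
    bins = {}
    for humans, model in items:
        key = int(np.median(humans))
        bucket = bins.setdefault(key, {"h": [], "m": [], "n": 0})
        bucket["h"] += list(humans)
        bucket["m"] += list(model)
        bucket["n"] += 1
    n_items = sum(b["n"] for b in bins.values())
    score = 0.0
    for b in bins.values():
        jsd = distance.jensenshannon(
            likert_distribution(b["h"], scale), likert_distribution(b["m"], scale))
        score += (b["n"] / n_items) * jsd
    return score

# The toy table: items A, B, C with their human and model labels.
items = [([2, 2, 3], [3, 2]), ([1, 2, 2], [1, 1]), ([2, 3, 3], [2, 2])]
print(round(binned_jsd(items), 2))  # ~0.40 (0.39 when the per-bin scores are pre-rounded)
```

Note that `scipy.spatial.distance.jensenshannon` returns the Jensen-Shannon distance (the square root of the divergence, base e by default), which is what yields the 0.31 and 0.56 per-bin values above.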
"{\"title\": \"Response to reviewer 3 - 3-4\", \"comment\": \"General response to weaknesses\\n\\n3. Additionally, testing the framework on a broader range of generative models beyond text (such as image or audio) would demonstrate its versatility\\n\\n Thank you for the suggestion. Our work was initially motivated by the widespread use of LLMs-as-a-Judge and our observations of their associated challenges. While our experiments focus on text-based generative models, we emphasize that the proposed method is modality-agnostic. Since the method does not rely on textual features, it can be directly extended to other modalities, such as images. We have also made efforts to open-source our implementation to facilitate its broader adoption and adaptation for diverse modalities. \\n\\n2. The perception charts are helpful, but they primarily show aggregate trends, which may obscure individual item-level discrepancies; adding item-level visualizations or error bars could improve clarity. \\n\\nWe appreciate the suggestion. In section 4.2 we mention that \\\"Effective visualization is a trade-off between plotting every single data point (too much information that is hard to synthesize) and an aggregate view (summarized view where key information might be obscured).\\\" Prior to proposing the perception charts, we initially attempted to use existing Bland-Altman plots (https://www.ajo.com/article/s0002-9394(08)00773-3/fulltext), an item-level plot, to visualize our data, but they were very difficult to synthesize: identical <x,y> values tend to overlap, and the sheer amount of information also meant we couldn't find any patterns. That led us to create the proposed set of plots. We agree that our aggregate view may not cover all the problems; however, a reasonable workaround might be to plot a subset of the dataset to zoom in on the problem area, and when the subset size is small enough, item-level plots such as Bland-Altman may become useful. \\n\\n4. 
To connect more concretely with real-world applications, the paper could benefit from case studies or examples where the framework enhances specific generative tasks, such as in recommender systems.\\n\\nWe have expanded Appendix A.11 to include a case study with four additional datasets (Topic chat, DICES and QAGS). These datasets are used in popular LLM-as-a-judge papers to draw conclusions about LLM capabilities. We summarize the findings of the case study (detailed numbers in Appendix A.11): \\n\\n- **Effects of Multi-metric reporting:** On the Topical Chat (TC) dataset, for the understandable criteria, the aggregate $H^wM^w$ Krippendorff-$\\\\alpha$ score of -0.01 *superficially seems to imply* that HM correlation is low, as shown in Table 6. However, percentage agreement (score 0.97) and Randolph-$\\\\kappa$ (score 0.93) score quite highly, indicating that class imbalance has substantially lowered Krippendorff-$\\\\alpha$ to nearly 0.0. Also note that over 96\\\\% of the samples have perfect human agreement, yet the overall HH Krippendorff-$\\\\alpha$ is quite low, scoring -0.01. This effect of how various chance-adjusted metrics impact correlation scores is also discussed in detail in section 4.3.\\n\\n- **Impact of stratification:** When we compare the overall performance (column All in Table 6) on dataset TC (criteria: understandable) with dataset QAGS, Randolph-$\\\\kappa$ drops substantially by 19 points (0.93 $\\\\rightarrow$ 0.74). However, the QAGS dataset has around 66\\\\% of samples with perfect human agreement, while TC has 96\\\\% of samples with perfect human agreement. 
When we compare the samples with perfect human agreement between the 2 datasets, the model performance gap reduces to just 6 points (0.93 $\\\\rightarrow$ 0.87), pointing to how comparing datasets with different proportions of uncertain samples can affect our conclusions (in this case, *incorrectly suggesting that the model performance is substantially lower on QAGS compared to TC*). With the DICES dataset (crowdsourced with over 100 annotations per item, with no perfect-agreement items), on the other hand, the model seems to struggle across all metrics and stratification groups, indicating much deeper investigation is required. The general trend \\u2013 that in a stratified group with a relatively higher proportion of noisy or uncertain samples (as measured by low HH correlation), the $H^wM^w$ correlation seems to outperform the HH correlation, as \\\\mycolorbox{LightBlue}{highlighted} in Table 8 \\u2013 also applies here, indicating how \\\\textit{models can superficially appear to approximate the human majority when the proportion of uncertain samples is relatively higher}, as discussed in Section RQ1. \\n\\n Our framework, by pinpointing the source of discrepancies, can guide practitioners toward addressing problems on the appropriate side and drawing the right conclusions, instead of inferring (sometimes erroneously) from a single aggregate that either models are better or inadequate, as demonstrated in our paper.\"}",
"{\"title\": \"Response to reviewer 2 - Q1-3\", \"comment\": \"1. Some terms, like H^W R^W (^ indicates apex), are defined in the caption of Table 1. Probably they should be defined in the text and used in the table.\\n\\nThank you for pointing this out. We have updated the text in RQ1 to indicate it as follows -- \\u201cAt surface level, HM correlation seems to improve with human certainty, as shown in Table 1 \\u2013 column $H^wM^w$ comparing Human majority ($H^w$) with machine majority ($M^w$).\\u201d \\n\\n\\n2. It is unclear how the partitions are decided. In the experiments: in Table 1 the thresholds are 0, 0.8, 1; in Table 2 the thresholds are 0.6 and 1; in Table 3 they are 0.6, 0.8, 1.0. It is not clear if the thresholds are experiment dependent or there is a rationale behind the threshold selection. \\n\\n In Table 1, the MNLI and SNLI datasets have exactly 5 human labels per item. Hence, you have the following scenarios \\u2013 5/5 = 1.0, 4/5 = 0.8, 3/5 = 0.6. Hence, we have stratified by all possible votes for the majority label. In Table 2, for SummEval, there are exactly 3 human labels. Hence, you have 3/3 = 1.0, 2/3 = 0.67, 1/3 = 0.33, for the median label. In Table 2, we only showed the maximum and least agreement for brevity; we have now included all the stratifications in Appendix Table 5 and explained the stratification. The results do not change and remain the same. \\n\\n For Table 3, each item has 3-5 human labels (a varying number), with the majority of items having closer to 3 labels. The main challenge with the MTBench dataset displayed in Table 2 is that the sample size is quite small for each model pair with multiple annotations. Hence, some partitions have fewer than 5 samples or even 0, unlike the MNLI or SNLI datasets, which have a few thousand samples. Hence, we have reported the partitions with a reasonable number of samples, which is 100\\\\%, 60-80. We have now included all the partitions with even 1 sample in Appendix Table 6. 
Our findings hold across all partitions in Table 6 as well.\\n\\n3. The random classifier test, reported in Table 1, should be better described. The table reports unique = 2 or unique = 3 with a percentage. The term unique is not present in the description and should be explained for a clear presentation of the experiment. If the MNLI and SNLI datasets have only a limited set of labels, it should be specified in the description. \\n\\n \\n\\nThank you for your comment. We mention in RQ1 \\u201cWe also stratify samples by the number of unique human labels to ensure that our findings are consistent regardless of the stratification method.\\u201d \\nWe have also now included the code in Appendix A.11 to clarify how the random simulated dataset was created, and we have now referred to it in RQ1. Unique here refers to the number of unique labels that human labellers assign to a given item. Here is the code we use to generate the synthetic dataset.\\n\\n```\\nimport random\\nimport numpy as np\\nimport pandas as pd\\n\\ndef synthetic_random_nominal_dataset():\\n \\\"\\\"\\\"\\n Simulates a random binary dataset with 2 human labellers.\\n The 2 humans can either (a) both pick 0 (b) both pick 1 (c) one picks 0 and the other picks 1 or vice versa\\n :return:\\n \\\"\\\"\\\"\\n dataset_size = 200\\n humans_2_one_pick_0_other_picks_1 = [random.sample([0, 1], 2) for _ in range(dataset_size // 2)]\\n humans_2_both_pick_1 = [[1, 1] for _ in range(dataset_size // 4)]\\n humans_2_both_pick_0 = [[0, 0] for _ in range(dataset_size // 4)]\\n\\n human_2_annotators_binary_simulated = humans_2_one_pick_0_other_picks_1 + humans_2_both_pick_1 + humans_2_both_pick_0\\n \\n random_labeller_choice = [np.random.choice([1, 0]) for _ in range(dataset_size)]\\n \\n # Final df\\n df = pd.DataFrame(data={\\\"human_labels\\\": human_2_annotators_binary_simulated,\\n \\\"random_labeller\\\": random_labeller_choice\\n })\\n\\n return df\\n\\ndef 
synthetic_random_ordinal_dataset():\\n \\\"\\\"\\\"\\n Simulates a random 3 way classification 1-2-3, with 2 human labellers.\\n The 2 humans can either (a) both pick 1 (b)both pick 2. and so on (c) disagree\\n :return:\\n \\\"\\\"\\\"\\n dataset_size = 600\\n humans_disagree = [random.sample([1, 2, 3], 2) for _ in range(dataset_size // 2)]\\n humans_agree_1 = [[1, 1] for _ in range(dataset_size // 6)]\\n humans_agree_2 = [[2, 2] for _ in range(dataset_size // 6)]\\n humans_agree_3 = [[3, 3] for _ in range(dataset_size // 6)]\\n\\n human_2_annotators_ordinal_simulated = humans_disagree + humans_agree_1 + humans_agree_2 + humans_agree_3\\n\\n random_labeller_choice = [np.random.choice([1, 2, 3]) for _ in range(dataset_size)]\\n\\n df = pd.DataFrame(data={\\\"human_labels\\\": human_2_annotators_ordinal_simulated,\\n \\\"random_labeller\\\": random_labeller_choice\\n })\\n\\n return df\\n```\"}",
"{\"title\": \"Response to reviewer 2 - Q 8\", \"comment\": \"8. Is the binned JSD the best metric for the proposed experiments? Is it possible to calculate this metric, or its adaptation, for all the experiments proposed in the paper?\\n\\nReviewer 1 has also asked a similar question. Here is our explanation.\\n\\nThe use of binned-JSD solves a specific problem, where a single majority label is not sufficient to represent the human preference, as mentioned in section 4.2. \\n\\nThe advantage of binned-JSD can be demonstrated using a toy example (also now added to Appendix A.12 and referred to in the paper in section RQ2) as follows. If we only used a single gold human label (human_median in this case; it could also be human_majority) to compute correlation, the acceptable values that humans have chosen are lost. As a result, metrics such as Krippendorff will treat any value that is equidistant from the single human \\u201cgold\\u201d label as acceptably similar. For instance, assume that humans choose \\u201cdisagree\\u201d or \\u201cneutral\\u201d (the median/majority value) (selections <2,3,3>). A good model chooses \\u201cdisagree\\u201d and a bad model chooses \\u201cagree\\u201d (completely different to the human choices). Because both \\u201cdisagree\\u201d (Likert 2) and \\u201cagree\\u201d (Likert 4) are equidistant from the median/majority value (Likert 3, the median value), K-alpha assigns very similar scores to the model that chose \\u201cdisagree\\u201d (Likert 2) and the model that chose \\u201cagree\\u201d (Likert 4). Rank correlation metrics, in addition to their misfit in comparing item-level Likert scores, already discussed in the paper in section RQ2, also have a similar problem and deem the model that is poor as the better model, as shown below. 
Our proposed approach, on the other hand, assigns lower (better for JSD) scores to the better model, as the \\u201cgood\\u201d model assigns values that the humans have chosen, compared to the poor model that has predicted a different score altogether. \\n\\n| humans | human_median | model_good | model_poor |\\n|:----------|---------------:|-------------:|-------------:|\\n| [2, 2, 3] | 2 | 3 | 1 |\\n| [1, 2, 2] | 2 | 1 | 3 |\\n| [2, 3, 3] | 3 | 2 | 4 |\\n\\nMetric results \\n| metric | model_good | model_poor | Does better model score better |\\n|----------|--------------|--------------|----------------------------------|\\n| Tau | 0 | 0.82 | False |\\n| Rho | 0 | 0.87 | False |\\n| K-alpha | -0.06 | -0.06 | False |\\n| JS_b | 0.56 | 0.65 | True |\\n\\n\\n\\nThe example above also exemplifies how, unless we know a priori which model is better, it is difficult to identify the advantages / shortcomings of correlation measurements, including the proposed binned-JSD. The effectiveness of metrics depends on the data. We don't know for certain if a model appears to be better/worse because of gaps in the metrics, creating a chicken-and-egg problem in measuring the effectiveness of a metric itself. Not knowing which metric is appropriate is a common problem when it comes to correlation metrics [Hove2018], including problems with Cohen\\u2019s Kappa [see Krippendorff 2004, cited over 4000 times] despite it being commonly used, including in many LLM-as-a-judge papers. \\n\\n Hence, our recommendation in section 4.3, \\u201cRecommendations for reporting effectiveness of automated methods\\u201d, of stratification, visualization and multi-metric reporting, so we can interpret the strengths and gaps in the metrics. In particular, we suggest *\\u201c2. Multi-metric reporting: If there was no uncertainty, measures such as F1 would have worked. However, as a result of uncertainty, no single metric can capture important insights about every type of data as demonstrated in Sections 3.1, 3.2 and 3.3. 
Thus, we recommend reporting on multiple metrics belonging to different families, such as chance and non-chance-adjusted measures, so each metric in its own way can assist in bringing the less obvious\\u201d.* \\n\\nThrough this paper and the arguments we make, we would like to encourage the research community to take a deeper look at metrics to understand the gaps between metrics vs the reality of comparing machine with human judgements as a result of uncertainty.\"}",
"{\"summary\": \"The paper describes the analysis of different measures in evaluating LLM responses.\\nA measure, specifically a binned Jensen-Shannon divergence, is proposed.\\n\\nThis measure for ordinal perception data is justified by the author since the evaluation does not need a single gold label and the human and the machine are not interchangeable. This last condition breaks a necessary condition for the Krippendorff-alpha coefficient.\\n\\nA distinction is made among nominal values, ordinal values and continuous values. The questions are not equally proposed for the three types of data, raising some difficulty in reading the paper.\", \"rq1\": \"How does uncertainty in human labels impact correlation metrics when\\nwe measure the efficacy of automatic evaluation methods? (Sec. 3.1)\\n\\nThe authors state that when the uncertainty in human labels is high, the human-machine majority labels are similar. The meaning appears to be that if there is no concordance among labellers, then the LLM judge is ok. Is the LLM-judge just adding noise to the labelling process?\", \"rq2\": \"How can we measure human-to-machine (HM) agreement that accounts for human uncertainty as a result of variation in human perception?\\nHuman-to-machine agreement is a measure of uncertainty in human perception. For this question, the comparison with different agreement percentages is tested. 
For this task, the binned Jensen-Shannon divergence is proposed.\", \"rq3\": \"How can we visualize the underlying data to draw meaningful insights when we compare the results from automatic and human evaluations?\\n\\nThe authors compare ordinal and perception-based ratings between humans and machines\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The topic is new, and since human-labelled data are difficult to retrieve, an evaluation of machine-labelled data is very interesting.\\n\\nThe tests have been done on different LLMs \\n\\nMultiple tests have been performed.\", \"weaknesses\": \"The work's presentation is not clear. The research questions help to interpret the experiments but not all the results are clear.\\n\\nSome terms, like H^W R^W (^ indicates a superscript), are defined in the caption of Table 1. Probably they should be defined in the text and used in the table.\\n\\nIt is unclear how the partitions are decided. In the experiments: in Table 1 the thresholds are 0, 0.8, 1; in Table 2 the thresholds are 0.6 and 1; in Table 3 they are 0.6, 0.8, 1.0. It is not clear if the thresholds are experiment dependent or there is a rationale behind the threshold selection.\\n\\nThe random classifier test, reported in Table 1, should be better described. The table reports unique =2 or unique =3 with a percentage. The term \\u201cunique\\u201d is not present in the description and should be explained for the clear presentation of the experiment. If the MNLI and SNLI datasets have only a limited set of labels, it should be specified in the description.\\n\\nIn Table 2 the terms H^mu M^mu are not specified. \\n\\n\\nFigure 3 shows the human perception vs the machine labels, binned by human median rating. The JS value is reported.\\nThe highest values of JS are for \\\\bar{H}=1 and \\\\bar{H}=5. 
Looking at the histograms, the histograms of human and machine with \\\\bar{H}=3 (and in some measure \\\\bar{H}=4) are very similar, but the JS values are noticeably lower.\\nThe authors state that humans tend to be more certain when they assign extreme ratings and the machine rarely provides extreme values. \\nCould the explanation of the experiment also take this aspect into account? If there is a different interpretation of this discrepancy (similar histograms but lower JS values) it would be useful to make this point more evident.\", \"questions\": \"Could the author provide some detail on the selection of different thresholds across the tables? It would be useful to clarify whether these thresholds are dataset-specific or if there's a general principle behind the threshold selection.\\n\\nIn the paper, different metrics are used and the human and machine labels are compared with average, with median or with majority labels. Are all these comparisons needed? Do they capture multiple aspects of the outputs?\\n\\nIs the binned JSD the best metric for the proposed experiments? Is it possible to calculate this metric, or its adaptation, for all the experiments proposed in the paper?\\n\\nThe bins are used to mimic human perception. Beyond the aggregation of perception, can they capture variation in human perception?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer 1 - question 2 - 4\", \"comment\": \"2. The type of human annotation is limited. Although the paper presented very well the type of collected human annotations, I limited these in their evaluations. Given the space and the breed of the paper, I suggest adding a few more datasets (Please check paper [1] for some guidance on what dataset to choose.)\\n\\nWe have now included 3 additional datasets (Topical Chat, DICES and QAGS, used in paper [1]) as a case study in the Appendix (section A.11) of our paper and referenced in the main body in section 4.3. This is in addition to the existing 6 datasets (SNLI, MNLI-matched, MNLI-mismatched, SummEval, Mt-bench and synthetic datasets) on several models \\u2013 Mistral, Sonnet, LLama and GPT-4. Our findings hold, and we summarize the findings here for the 3 new datasets.\\n\\n- **Effects of Multi-metric reporting:** On the Topical Chat (TC) dataset, for the understandable criteria, the aggregate $H^wM^w$ Krippendorff-$\\\\alpha$ score of -0.01 *superficially seems to imply* that HM correlation is low, as shown in Table 6. However, percentage agreement (score 0.97) and Randolph-$\\\\kappa$ (score 0.93) score quite highly, indicating that class imbalance has substantially lowered Krippendorff-$\\\\alpha$ to pretty close to 0.0. Also note that over 96\\\\% of the samples have perfect human agreement; however, the overall HH Krippendorff-$\\\\alpha$ is quite low, scoring -0.01. This effect of how various chance-adjusted metrics impact correlation scores is also discussed in detail in section 4.3.\\n\\n - **Impact of stratification:** When we compare the overall performance (column All in Table 6) on dataset TC (criteria understandable) with dataset QAGS, Randolph-$\\\\kappa$ drops substantially by 19 points (0.93 $\\\\rightarrow$ 0.74). However, the QAGS dataset has around 66\\\\% of samples with perfect human agreement, while TC has 96\\\\%. 
When we compare the samples with perfect human agreement between the 2 datasets, the model performance gap reduces to just 6 points (0.93 $\\\\rightarrow$ 0.87). The model seems to struggle with the DICES dataset (crowdsourced with over 100 annotations per item, with no perfect agreement items) across all metrics and stratification groups, indicating that a much deeper investigation is required. The general trend, where in a stratified group with a higher proportion of noisy or uncertain samples (as measured by low HH correlation) the HM correlation seems to outperform HH correlation in Table 6, applies as previously discussed in Section RQ1. \\n\\n3. The study considered four datasets with variant tasks and annotations that have sufficient human annotators. I think the number of annotations was well considered, but the number of datasets could be improved, and the meta-analysis could have been better presented instead of large tables per dataset.\\n\\nWe have increased the datasets as suggested. We acknowledge that the information is complex to synthesize as the tables present results stratified by different groups and metrics. This was the simplest way with our best efforts. Happy to incorporate any specific suggestions you have. \\n\\n\\n4. How come the metric does not need a single value to approximate human labels but relies on a single \\\"human and machine labels are not treated interchangeably, as the items in a given bin are selected by the human median or majority value\\\"? This seems contradictory to me.\\n\\nBinned-JSD considers the variation of the human labels. The bin an item belongs to is assigned by the median or majority value, which is hence only used to assign the bin numbers. In the example below, we compare the probability distribution of the human labels in bin 2, [2,2,3,1,2,2], with the corresponding machine labels, [3,3,1,1]. 
We repeat the same for each bin, in this example bin 3 as well, and do a weighted sum of JSD for each bin, so that the binned-JSD always ranges between [0, 1]. \\n\\n| humans | human_median (bin) | model |\\n|:----------|---------------:|-------------:|\\n| [2, 2, 3] | 2 | [3,3] |\\n| [1, 2, 2] | 2 | [1,1] |\\n| [2, 3, 3] | 3 | [2,1] |\"}",
"{\"comment\": \"The update was not saved, it should be ok now.\"}",
"{\"summary\": \"The paper discusses the current landscape of using LLMs as judges for various tasks and presents compelling arguments for why existing correlation metrics might not take into account variations and uncertainty in human judgment.\\n\\nAfter the author's reply (which includes more datasets and clarifications about the metric), I believe that the paper makes good contributions and good cases showing how to be careful when using LLMs as judges. I therefore recommend accepting the paper!\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1) The paper is well structured and presented and is very clear and easy to follow and read.\\n2) The paper clearly shows the issue related to relying on a high correlation between human and machine-generated outputs, giving cases where the uncertainty in human annotations is high; these correlations seem to be high, but that could also be the case even when the labeling is random. \\n3) The study proposed a metric, namely binned JSD, to account for variations and uncertainty in human judgment.\", \"weaknesses\": \"1) It is not clear to me how this new metric handles the issues raised with traditional metrics. Could the author clarify and show cases of how JSD improves the analysis of LLMs as judges when compared to human judgment? It seems like a promising direction, but I am not convinced due to the limited number of datasets and support of the authors' claim.\\n\\n2) The type of human annotation is limited. Although the paper presented very well the type of collected human annotations, I limited these in their evaluations. Given the space and the breed of the paper, I suggest adding a few more datasets (Please check paper [1] for some guidance on what dataset to choose.)\\n\\n3) The study considered four datasets with variant tasks and annotations that have sufficient human annotators. 
I think the number of annotations was well considered, but the number of datasets could be improved, and the meta-analysis could have been better presented instead of large tables per dataset. \\n\\n\\n[1] Bavaresco, Anna, et al. \\\"Llms instead of human judges? a large scale empirical study across 20 nlp evaluation tasks.\\\" arXiv preprint arXiv:2406.18403 (2024).\", \"questions\": \"How come the metric does not need a single value to approximate human labels but relies on a single \\\"human and machine labels are not treated interchangeably, as the items in a given bin are selected by the human median or majority value\\\"? This seems contradictory to me.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Reviewer 2 APGo Question\", \"comment\": \"Dear reviewer,\\nFollowing up to check, since you mentioned that you have adjusted the review, the scores have not been updated. When you get a chance, it would be great if you could confirm - Thank you :-)\"}"
]
} |
E8TPUAimyJ | Context-Scaling versus Task-Scaling in In-Context Learning | [
"Amirhesam Abedsoltan",
"Adityanarayanan Radhakrishnan",
"Jingfeng Wu",
"Mikhail Belkin"
] | Transformers exhibit In-Context Learning (ICL), a phenomenon in which these models solve new tasks by using examples in the prompt without additional training. In our work, we analyze two key components of ICL: (1) context-scaling, where model performance improves as the number of in-context examples increases and (2) task-scaling, where model performance improves as the number of pre-training tasks increases. While transformers are capable of both context-scaling and task-scaling, we empirically show that standard Multi-Layer Perceptrons (MLPs) with vectorized input are only capable of task-scaling. To understand how transformers are capable of context-scaling, we first propose a significantly simplified transformer that performs ICL comparably to the original GPT-2 model in statistical learning tasks (e.g., linear regression, teacher-student settings). By analyzing a single layer of our proposed model, we identify classes of feature maps that enable context scaling. Theoretically, these feature maps can implement the Hilbert estimate, a model that is provably consistent for context-scaling. We then show that using the output of the Hilbert estimate along with vectorized input empirically enables both context-scaling and task-scaling with MLPs. Overall, our findings provide insights into the fundamental mechanisms of how transformers are able to learn in context. | [
"in-context learning",
"kernel smoothers",
"Hilbert estimate"
] | Reject | https://openreview.net/pdf?id=E8TPUAimyJ | https://openreview.net/forum?id=E8TPUAimyJ | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"uyuSW8VQjf",
"r6Y1ToVj1l",
"qxwaP2IJDD",
"qshObuBm1O",
"q7A6JI98F5",
"pf6Zc4Sh2B",
"o2tGRCcW86",
"jtFeurRCr0",
"iQN9KvrP2H",
"blEMncHmGA",
"bAnGyIS0SU",
"YLW9FKVjgR",
"XfqhxAN8bN",
"Us8y7AwRG4",
"TEBUHcldZ3",
"NVbTn2TI2M",
"IomsccMZsA",
"IftiqJbTPj",
"I6CRKShrl4",
"GogvI688za",
"DbCRZqy6Vq",
"BwUz634XMS",
"8EgK7s9JQz",
"3WKzgzE8OT"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1731968847869,
1732652137873,
1731968557480,
1730113299487,
1734890212859,
1731968466651,
1730528794386,
1732589041764,
1732753433463,
1733202500197,
1732624838418,
1733110585855,
1737524151384,
1730223085434,
1730711751816,
1732754793116,
1733259105819,
1732640717705,
1731968328071,
1732226000102,
1733204375549,
1733187697771,
1731968901147,
1733084655705
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_thSw"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_DfAQ"
],
[
"ICLR.cc/2025/Conference/Submission11868/Area_Chair_63Ze"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_2MBm"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_DfAQ"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_NaCy"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_thSw"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_NaCy"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_NaCy"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_thSw"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_NaCy"
],
[
"ICLR.cc/2025/Conference/Submission11868/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11868/Reviewer_NaCy"
]
],
"structured_content_str": [
"{\"comment\": \"We thank the reviewer for the comments. We will address the concerns as follows.\\n\\n> **Reviewer comment:** On feature maps for context-scaling\\n>a) If you construct a Kernel based mapping of the form in equation 11, ... Thus the experiments you conduct in Figure 5 are not really interesting. .... Would you elaborate, what is really the important and new insight one should take from Figure 5?\\n\\n**Our response :** The point of Figure 5 is to set up the question of context scaling. While it is true that Hilbert kernels can context scale, it is not a priori clear that SGPT can context-scale since SGPT is not a kernel smoother. Furthermore, prior to our work, it was not clear what part of context-scaling was due to the attention alone and what part was due to key, query, and value weights. \\n\\n> **Reviewer comment:** b) In the first part of Section 5.1, the authors construct a mapping which is based on linear attention and show the equivalence to one step of gradient descent. This result is already known from previous works such as Von Oswald's work. Can the authors explain what is truly new in Section 5.1?\\n\\n**Our response :** In Section 5.1, we show that attention can implement the Hilbert estimate, which is statistically consistent for any statistical in-context learning task. In contrast, one step of gradient descent is not generally statistically consistent for statistical in-context learning tasks. Thus, implementing 1-step of gradient descent is not a major point. We simply show that it can be obtained from our framework as a special case (note that we cite Oswald et al. (2023)). \\n\\n> **Reviewer comment:** On the results with simplified transformer: As i stated in the strengths section, I found the result with simplified transformer intriguing. ... What would have happened if the authors only used linear attention and no normalization? 
While the model is close to standard GPT in terms of performance, how much of this is due to learning that happens due to the MLPs at different depths? Put differently, can the authors show the impact of depth on the performance, i.e., as depth increases the model learns a more complex hierarchy of features that allow it to match GPT? If that is the case, then this should be clearly stated in the paper too, that depth was crucial to match the performance of GPT at short context lengths. In some sense, if depth is crucial to match the performance, then the role of kernel smoothing alone is not a crucial one.\\n\\n\\n**Our response :** \\n1. Role of L1 Normalization: We found the l1 normalization to be helpful in avoiding numerical instability when training multi-layer SGPT. \\n2. Impact of Depth on Performance: In all of our results, we used the same depth for SGPT as for GPT-2. See Appendix A for details on how we selected depth for these models. \\n\\n> **Reviewer comment:** On Section 5.2: In this section, the authors study various variants of MLPs to study context scaling and task scaling capabilities. I find some things unclear here.\\n> a) Firstly, I find the fact that vectorized MLPs with sufficient capacity not able to context scale not clear. ... So my question to the authors is \\\"If MLP has sufficient capacity, what stops it from implementing a ridge regression solution at different context lengths up to the maximum context length determined by the size of the vectorized input?\\\" Basically, if the MLP is not able to learn, then it has to be an argument that is not explained by expressivity but by learnability. Perhaps the optimization cannot find the global minimum in the above case easily?\\n\\n**Our response :** Our work is unrelated to expressivity arguments and instead focuses on learnability. We show that MLPs, on their own, do not learn a solution that generalizes to large context length even though in theory they are capable of doing so. 
In case we misunderstood your question, we would appreciate a clarification. \\n\\n> **Reviewer comment:** b) Secondly, the authors show that MLPs with features from kernel smoothers can context scale. I don't quite get what's the surprise here. Isn't the input feature to these MLPs itself guaranteed to be consistent in the infinite context limit?\\n\\n**Our response :** While kernel smoothers can context-scale, they cannot task-scale. MLPs with the combination of raw data and kernel features can perform both.\", \"title\": \"Official Comment by Authors (Part 1)\"}",
"{\"comment\": \"Thank you for your response. Unfortunately, I do not think they address my fundamental concerns about the novelty of this work.\\n\\n> A novel insight of our work beyond that of von Oswald et al. (2023) is that SGPT improves over both standard kernel smoothers and 1-step of GD. Indeed, one-layer of SGPT is better than both of these previous algorithms as it combines an MLP and kernel smoothing to simultaneously task and context scale. Empirically, we demonstrate in Fig. 7 that combining MLP with kernel smoothing features significantly outperforms 1-step of GD and kernel smoother on both linear regression and nonlinear tasks. This contrasts with von Oswald et al. (2023), who focus on demonstrating the equivalence of a one-layer transformer to a single step of GD (or kernel smoothing in their Appendix A.1). \\n\\nI think this is a neat finding. However, it seems that your theoretical construction cannot account for this, correct? I think this could be a good motivation for better understanding how exactly this MLP overcomes these prior theoretical constructs. However, without such a deeper analysis, I do not think it is a sufficient result.\\n\\n> A fixed context length is part of the assumptions in the previous theoretical results (Von Oswald et al. (2023), Ahn et al. (2023), Zhang et al. (2024a), Mahankali et al. (2024), Zhang et al. (2024b)). In particular, Proposition 1 in Von Oswald et al. (2023) states that for any given pairs of context examples, there exists a transformer such that its output is identical to the one-step GD output over the $N$ pairs of context examples. Therefore, Proposition 1 cannot be used if we test the same transformer with a different number of context examples.\\n\\n> The only work we are aware of for analyzing varying context-length is Theorem 5.3 in Wu et al. (2024), which allows testing a pre-trained attention model with a varying context length. 
However, their results are only tight when the context length is close to the one used in training. \\n\\nIsn't your construction here the exact same used in von Oswald et al., except without the scaled projection matrix though? Please clarify if I'm misunderstanding.\"}",
"{\"comment\": \"We thank the reviewer for the comments. We will address the concerns as follows.\\n\\n> **Reviewer comment:** The discussion on context-scaling in MLPs appears to be drawing from prior work .... Were you able to look at classification tasks as well? Further, it looks like Tong and Pehlevan made the choice of plotting excess MSE above Bayes optimal rather than raw MSE. Because Bayes optimal MSE falls as context length increases, zero excess MSE would imply context-scaling. Looking at Figure 6b, for fewer dimensions than the d=8 you tested, it does appear that MLPs adhere to the Bayes optimal level before failing at longer contexts.\\n\\n**Our response :** Per the reviewer\\u2019s suggestion, we added an additional experiment showing that MLPs cannot context scale even for classification. Below, we consider a classification problem where the label is given by the \\u201csign\\u201d of the output of a linear regression model - if the sign is positive, the label is 0 and if it is negative, the label is 1. We find MLP performance first improves and then gets worse with increasing context length, indicating a failure to context-scale. \\n\\n| Context Length | 5 | 10 | 20 | 30 | 40 | 60 | 80 | 100 | 120 |\\n|---------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|\\n| 100k Pre-training | 66.61% | 68.68% | 69.21% | 69.35% | 68.43% | 70.16% | 70.14% | 69.84% | 69.69% |\\n| 1M Pre-training | 69.9% | 74.25% | 76.49% | 77.66% | 78.08% | 77.07% | 77.23% | 75.39% | 74.92% |\\n\\nFurthermore, Fig. 1d of Tong & Pehlevan does not provide information about context scaling as it is measuring excess risk, which refers to the difference between the model\\u2019s MSE and that of optimal ridge regression. Optimal ridge regression varies with context length, and thus it is possible for excess risk to increase even though MSE decreases. 
In their figure, even transformers have higher excess risk (error bars indicate performance approaches that of random prediction) as context length increases even though we know these models do context-scale.\\n\\n> **Reviewer comment:** Overall, it would appear that context-scaling in MLPs does happen, but is bottlenecked by some aspect of insufficient data and long inputs, ... Indeed, it looks like in your Figure 7B, you do demonstrate some context-scaling in unmodified MLPs for 5M-pretraining tasks (top row), quite similar to using \\\\psi_L features.\\n\\n**Our response :** Our claim is that standard MLPs trained on vectorized data do not context scale. Note that by context-scaling, we refer to a setting in which the number of pretraining tasks is fixed, and we expect performance to improve as context examples increase \\u2013 rather than initially improve and then degrade at a certain point. Even for the 5M pre-training case referenced by the reviewer, increasing the context length results in worse MLP performance, as we show below. Note that performance continually improves for \\\\psi_L, \\\\psi_H features while MLP performance first improves and then worsens as context length increases:\\n\\n\\n| **Context Length** | **5** | **10** | **20** | **30** | **40** | **60** | **80** | **100** | **120** | **160** |\\n|---------------------|---------|---------|---------|----------|----------|----------|---------|----------|----------|----------|\\n| **\\u03c8_L** | 0.73 | 0.54 | 0.39 | 0.305 | 0.25 | 0.199 | 0.164 | 0.147 | 0.138 | 0.134 |\\n| **\\u03c8_H** | 1.11 | 0.816 | 0.597 | 0.497 | 0.431 | 0.371 | 0.332 | 0.313 | 0.282 | 0.275 |\\n| **Vectorized** | 0.67 | 0.51 | 0.384 | 0.348 | 0.325 | 0.3 | 0.287 | 0.293 | 0.305 | 0.335 |\\n\\n> **Reviewer comment:** Additionally, I thought that kernel smoothers are weak in high dimensions, ... 
It's quite possible I misunderstand this aspect of your analysis, but it seems implausible that a kernel smoother interpretation of attention is applicable to real-world Transformers?\\n\\n**Our response :** The method we are proposing (one-layer SGPT) is not a kernel smoother, it is the combination of kernel smoother and MLP. We further show in Figure 7, that the combination has better sample complexity than kernel smoother (or one-step of GD) alone.\\n\\n> **Reviewer comment:** Finally, you mention there is a theoretical connection ... but I couldn't find the derivation in your manuscript. \\u2026 could you point me to its location?\\n\\n**Our response :** The derivation is in Appendix B.\\n\\n\\nWe hope our responses and additional experiments address your concerns. If any points remain unclear, we\\u2019d be happy to clarify. If the manuscript now aligns with your expectations, we\\u2019d appreciate your consideration of an updated score.\"}",
"{\"summary\": \"In recent years, there has been a growing interest in the community to understand and build better in-context learning capabilities. In this work, the authors study in-context learning through the lens of two separate abilities: context scaling and task scaling. Context scaling refers to the capability to improve predictions as the number of in-context examples grows while keeping the number of tasks fixed. Task scaling refers to the capability to improve predictions as the number of tasks grows while keeping the number of in-context examples fixed. The authors show that a lot of in-context learning capabilities arise from the behavior of attention as a kernel smoother. To show this, the authors consider a simplified version of the GPT architecture, which they call SGPT, where the attention blocks are not trained and key, query, and value matrices are set to identity. The authors compare a GPT-2-like architecture with SGPT and show that on synthetic in-context learning tasks from the literature, the performance of the two models matches. Further, the authors study context scaling and task scaling in MLPs. First, they show that MLPs with vectorized inputs are capable of task scaling and not capable of context scaling. They show that MLPs with featurized inputs are capable of context scaling but not of task scaling. Finally, they combine the two types of inputs to show MLPs exhibit both context scaling and task scaling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. I like the lens that the authors use in the paper, i.e., separating in-context learning into task scaling and context scaling. Even though this lens is not completely new, it's a useful one.\\n\\n2. The authors propose a simplification of GPT which is referred to as SGPT. 
This simplification is a useful one from two perspectives -- it gives way to a model that can be understood theoretically and it possibly indicates that we could simplify the architectures for in-context learning. \\n\\n3. Overall, the paper is easy to read and reasonably well presented.\", \"weaknesses\": \"I have several concerns with the paper that I highlight below.\\n\\n1. **On feature maps for context-scaling:** \\na) If you construct a Kernel based mapping of the form in equation 11, then of course due to sheer consistency argument one can say that the map in equation 12 converges to the true function. This would require infinite examples in-context. The whole point of in-context learning is to learn in-context as quickly as possible. From an asymptotics point of view, all the consistent estimators in equation 4 can learn the function. Thus the experiments you conduct in Figure 5 are not really interesting. If you took the model in equation 4 itself, then so long as the Kernel you use gives a consistent estimate, one could argue that as sequence length n increases in equation 4 the performance will improve and this model will context scale. Would you elaborate, what is really the important and new insight one should take from Figure 5?\\n\\n b) In the first part of Section 5.1, the authors construct a mapping which is based on linear attention and show the equivalence to one step of gradient descent. This result is already known from previous works such as Von Oswald's work. Can the authors explain what is truly new in Section 5.1? \\n\\n \\n2. **On the results with simplified transformer:** As i stated in the strengths section, I found the result with simplified transformer intriguing. However, there are few important ablations that would have helped understand this section better. First of all, the authors change the attention to a linear attention l1 normalization operation. 
What would have happened if the authors only used linear attention and no normalization? While the model is close to standard GPT in terms of performance, how much of this is due to learning that happens due to the MLPs at different depths? Put differently, can the authors show the impact of depth on the performance, i.e., as depth increases the model learns a more complex hierarchy of features that allow it to match GPT? If that is the case, then this should be clearly stated in the paper too, that depth was crucial to match the performance of GPT at short context lengths. In some sense, if depth is crucial to match the performance, then the role of kernel smoothing alone is not a crucial one. \\n\\n3. **On Section 5.2**: In this section, the authors study various variants of MLPs to study context scaling and task scaling capabilities. I find some things unclear here. \\n\\n\\n a) Firstly, I find the fact that vectorized MLPs with sufficient capacity not able to context scale not clear. The training data takes the form P, x, where P is the prompt that contains (x,y) pairs from the task of interest, x is the current query. In the case where the model class is unrestricted, the learner should ideally learn E[y| P,x], where y is the true label and expectation is over the distribution of prompts and query, label pairs. If the task of interest is linear regression (studied in Garg et al.), the coefficients of the regression are drawn from an isotropic Gaussian, and the features are drawn from an isotropic Gaussian as well, then \\n E[y| P,x] = ((X.t X + sigma I)^{-1} X.t Y).t x, where sigma is noise. This is the solution to standard ridge regression. Provided the model class, be it transformer or MLP, is sufficiently expressive and contains ((X.t X + sigma I)^{-1} X.t Y).t x, then in principle MLP should also be capable of context scaling. 
So my question to the authors is \\\"If the MLP has sufficient capacity, what stops it from implementing a ridge regression solution at different context lengths up to the maximum context length determined by the size of the vectorized input?\\\" Basically, if the MLP is not able to learn, then it has to be an argument that is not explained by expressivity but by learnability. Perhaps the optimization cannot find the global minimum in the above case easily? \\n\\n b) Secondly, the authors show that MLPs with features from kernel smoothers can context scale. I don't quite get what's the surprise here. Isn't the input feature to these MLPs itself guaranteed to be consistent in the infinite-context limit? \\n\\n\\n4. **On the role of context-length:** The authors state that existing works fail to explain the context scaling capabilities of transformers. In the example I gave in 3a), the Bayes optimal predictor is implemented by the transformer. This Bayes optimal predictor of course improves with context length. In this sense, I don't quite get why the authors say that existing works fail to explain context scaling capabilities. \\n\\n5. **On MLPs and context scaling**: The authors also mention that it is unclear whether MLPs scale with context and that they are the first to dive into this. Perhaps the authors have missed https://arxiv.org/pdf/2311.18194.\\n\\nOverall, I don't feel I learned something insightful from the paper; the only thing I liked was the experiment with SGPT.\", \"questions\": \"In the weakness section, I provide both the concerns and the questions for the authors.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
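The Bayes-optimal ridge predictor described in point 3a) of the review above is easy to sketch numerically. The following is a minimal illustration only (not code from the paper; the isotropic-Gaussian task setup follows the reviewer's description, and all variable names are ours):

```python
import numpy as np

def ridge_icl_predictor(X, Y, x_query, sigma2):
    """Bayes-optimal in-context predictor for linear regression with
    isotropic Gaussian task weights and noise variance sigma2:
    E[y | P, x] = ((X^T X + sigma2 I)^{-1} X^T Y)^T x (ridge regression)."""
    d = X.shape[1]
    w_hat = np.linalg.solve(X.T @ X + sigma2 * np.eye(d), X.T @ Y)
    return w_hat @ x_query

rng = np.random.default_rng(0)
d, sigma2 = 8, 0.1
w = rng.standard_normal(d)  # one task vector drawn from N(0, I)

def mse_at_context_length(n, trials=200):
    """Average squared error of the ridge predictor with n context pairs."""
    errs = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        Y = X @ w + np.sqrt(sigma2) * rng.standard_normal(n)
        x_q = rng.standard_normal(d)
        errs.append((ridge_icl_predictor(X, Y, x_q, sigma2) - w @ x_q) ** 2)
    return float(np.mean(errs))

# "Context scaling" in the reviewer's sense: MSE shrinks as n grows.
print(mse_at_context_length(10), mse_at_context_length(80))
```

As the reviewer argues, any consistent estimator of this form improves with the context length n; the debate in the thread is about whether that asymptotic property is the interesting one.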
"{\"metareview\": \"This paper explores task and context scaling in in-context learning. The authors propose simplified models like SGPT and make theoretical connections to kernel smoothers. While the ICL topic is very timely and relevant, the reviewers have concerns regarding the novelty and depth of insights. The claims regarding SGPT\\u2019s advancements over prior works are not convincingly justified, and some of the findings seem incremental. Reviewers found certain theoretical contributions to be unclear, and there was also a concern about insufficient experimentation to substantiate claims. For example, SGPT matching GPT-2's performance lacks thorough ablations. Furthermore, the analysis/discussion of MLPs raises valid questions that remain not fully resolved. Given the limited novelty and incomplete discussions, I recommend rejecting this submission in its current form.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers highlighted concerns about the lack of clarity and novelty in contributions. Despite the authors' responses, the explanations failed to address key issues, such as the theoretical novelty of SGPT and the ambiguous role of attention versus MLPs in in-context learning. Reviewers like thSw found the explanations for theoretical extensions over von Oswald et al. insufficient. Here, I also agree that the kernel smoothing viewpoint is likely not novel and discussed by prior art (at least implicitly) including by von Oswald et al., Collins et al., Chen et al. and Nguyen et al. (FourierFormer, NeurIPS'22, not cited). Similarly, Reviewers NaCy and DfAQ remained unconvinced by the experimental rigor and theoretical integration. In summary, while the paper makes insightful points and has good potential, it currently needs further revision and improvement for acceptance.\"}",
"{\"comment\": \"We thank the reviewer for the positive review. We will address the questions as follows.\\n\\n> **Reviewer comment:** \\n> What is the major intuition of taking key, query, and value matrices to be identity? Such intuition is vital since it is shown that the context-scaling capability is attributed to the attention, and task-scaling is to the MLP with vectorized data. I wonder if key, query, and value matrices are learnable, will they also provide sufficient task-scaling ability?\\n\\n**Our response :** The main reason for setting these matrices to be the identity is that it drastically simplifies the model while still being competitive with GPT-2. Furthermore, the fact that our model can both context-scale and task-scale shows that key, query, value weight matrices are not necessary for models to exhibit these properties. \\n\\n> **Reviewer comment:** What is the theoretical or explanatory justification for the capability of the task-scaling ability of transformer and MLP, generalization?\\n\\n**Our response :** In the case of a fixed context length, the input to the MLP is a combination of the raw data and an estimation of the label. Consequently, the MLP can be viewed as acting like a comparison model across tasks\\u2014somewhat analogous to a weighted k-nearest neighbor model. 
However, developing a rigorous theoretical foundation for this capability is beyond the scope of the current paper.\\n\\n> **Reviewer comment:** What is the task-scaling performance for single-layer SGPT (a task-scaling counterpart of Fig 5)?\\n\\n**Our response :** Here\\u2019s the performance of 1-layer SGPT:\\n\\n### Task: Linear Regression\\n\\n| # Pretraining Tasks | 1k | 10k | 100k | 1M | 5M |\\n|----------------------|-------|-------|-------|-------|-------|\\n| Context Length 10 | 0.545 | 0.545 | 0.540 | 0.476 | 0.470 |\\n| Context Length 40 | 0.292 | 0.290 | 0.285 | 0.217 | 0.210 |\\n\\n### Task: 2-Layer Neural Network\\n\\n| # Pretraining Tasks | 1k | 10k | 100k | 1M | 5M |\\n|----------------------|-------|-------|-------|-------|-------|\\n| Context Length 10 | 0.864 | 0.860 | 0.846 | 0.841 | 0.830 |\\n| Context Length 40 | 0.683 | 0.675 | 0.670 | 0.663 | 0.650 |\\n\\n\\n\\n> **Reviewer comment:** Could you explain more on the right panel of Fig 3?\\n\\n**Our response :** This plot zooms in on a fixed context length of 30 and shows that for both noise levels, the performance is comparable to ridge regression corresponding to each noise level. The black dots represent different ridge values, with the x-axis showing the MSE for the first noise level and the y-axis showing the MSE for the second noise level. If we were to choose a fixed ridge value, it\\u2019s clear that each noise level would require a different value, and no single value would work well for both noise levels. This plot demonstrates that both GPT-2 (also previously shown in Bai et al. (2023)) and SGPT (ours) can perform well across both noise levels, matching ridge regression. We will add this explanation to the Appendix.\\n\\n\\n\\n> **Reviewer comment:** Could you explain more on the Fig 4(B), 2-layer NN, SGD?\\n\\n**Our response :** This is the baseline previously used in Garg et al. 
(2023), where a similar 2-layer neural network architecture with new random initialization is considered and fine-tuned on the context data points. We will add this explanation to the Appendix.\\n\\nThank you again for your positive feedback and thoughtful evaluation. We appreciate your support and are happy to address any additional points if needed.\"}",
"{\"summary\": \"This paper studies context-scaling and task-scaling of ICL, under multiple ICL regression tasks, including linear regression with fixed and multiple noise levels, two-layer ReLU neural networks, decision trees, and sparse linear regression. Experiments are conducted on GPT2 and Simplified GPT (SGPT) by taking key, query and value matrices to be identity matrices and removing batch normalization. Under those tasks, GPT2 and SGPT both demonstrate context-scaling ability and the performances of GPT2 and SGPT are very close to each other. SGPT allows a kernel smoothing perspective interpretation of transformer's ICL capability. Specifically, the authors demonstrate the ICL capability of the transformer by showing a single layer SGPT can perform kernel smoothing (including the consistent Hilbert estimate as a special case) with an appropriate feature map corresponding to the attention. To see what statistics are the essence of task-scaling and context-scaling, experiments on MLP with different inputs, such as vectorized input data with or without kernelized features, are conducted. Specifically, task-scaling is attributed to the vectorized data and context-scaling to the kernelized features, and combining both inputs provides both task-scaling and context-scaling.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The SGPT considered and its experiments are novel. And it is quite surprising and interesting to see that its performance is comparable to GPT2. I suppose it is due to the simplicity of the ICL tasks conducted in the paper.\\n\\nThe idea of connecting ICL and kernel smoothing is clearly presented and insightful. 
\\n\\nThe separation of context-scaling and task-scaling via features from the kernel estimate and vectorized input is novel and can potentially help us understand their impacts better individually.\", \"weaknesses\": \"Though the Hilbert estimate is consistent, how exactly the transformer performs the Hilbert estimate, i.e., via the construction of the activation function in attention, is not straightforward.\\n\\nWhat is the major intuition of taking key, query, and value matrices to be identity? Such intuition is vital since it is shown that the context-scaling capability is attributed to the attention, and task-scaling is to the MLP with vectorized data. I wonder if key, query, and value matrices are learnable, will they also provide sufficient task-scaling ability? \\n\\nWhat is the theoretical or explanatory justification for the capability of the task-scaling ability of transformer and MLP, generalization?\", \"questions\": \"What is the task-scaling performance for single-layer SGPT (a task-scaling counterpart of Fig 5)?\\n\\nCould you explain more on the right panel of Fig 3? \\n\\nCould you explain more on the Fig 4(B), 2-layer NN, SGD?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"> **Reviewer comment:** Not meaning to split hairs, but your setup sounds more like logistic regression than classification. By classification, I had more in mind a setup with multiple clusters defined in-context, as in Reddy 2024, which Tong and Pehlevan seem to show does convincingly context scale, even by your definition. Do you have an idea why this might be the case?\\n\\n\\n**Our response :** We would like to clarify that our paper is not primarily focused on classification; rather, we included a classification task as an example to address your concerns. Furthermore, regarding Tong and Pehlevan, could you please specify which part of their paper you are referring to? In Figure 1, they vary the context length for a classification task, but we do not find any evidence of context scaling in their results.\\n\\n> **Reviewer comment:** ... At least in this fairly large window, the MLPs' performance improves substantially with longer context. That the MLPs' performance subsequently declines seems to be an independent issue. It's certainly up to you how you define the notion of \\\"context scaling\\\" exactly, but it seems somewhat disingenuous to claim that the MLP does not context scale when its performance improves substantially with longer contexts.\\n\\n\\n**Our response :** We would like to point out that Figure 7 includes four experiments. In three of them, the MLP clearly does not exhibit context-scaling behavior at all. In the specific case you mention (top-right panel), while there is an initial improvement with increasing context length, the performance does not continue to improve as the context length increases further, in contrast to attention-based features \\\\psi_L and \\\\psi_H, which provide consistent improvement with increased context length.\\n\\n\\n> **Reviewer comment:** ... there doesn't seem to be any derivation regarding the Hilbert estimate and Transformers in Appendix B...\\n\\n**Our response :** Please see lines 756-773. 
This derivation holds for any choice of kernel, including the Hilbert kernel. Please let us know if you would like any further clarification.\"}",
"{\"comment\": \"Thank you for your response.\\n\\n> **Reviewer comment:** This is certainly true, so perhaps your claim should be adjusted to \\\"MLPs sometimes context scale, and sometimes do not context scale,\\\" or even \\\"MLPs context scale for small context lengths given sufficient data\\\"? To claim that \\\"MLPs do not context scale\\\" seems incorrect (or at least, overly coarse) in light of this evidence.\\n\\n**Our response :** When we state that a model \\\"context scales,\\\" we mean it consistently improves as more context information is provided. Consistency is crucial because, with additional context data, one naturally expects a model to perform better. We will clarify this in the final version of the paper. Specifically, we emphasize that while MLPs may initially show improvements with added context, they do not consistently scale with context length. \\n\\n> **Reviewer comment:** Perhaps my confusion is the following. It seems like there are two possible interpretations to your claim about the Hilbert estimate:\\n>1. A Transformer's self-attention can implement the Hilbert estimate.\\n>2. A Transformer's self-attention matrix can be interpreted as a kernel smoothing operator. One such kernel is the Hilbert estimate.\\n\\n>I interpreted your claim to be (1), but it sounds like you actually intend (2)? If the latter, I'm unsure how Hilbert estimates fit into your overall argument, and why you included them? Why not stop at the 1-step GD kernel (in eq 10), which has a natural connection to the attention matrix structure? Would a Transformer implement anything that looks like a Hilbert estimate in practice?\\n\\n**Our response :** Both interpretations (1) and (2) are correct, and we appreciate the opportunity to clarify. As mentioned in line 431, if the kernel is chosen to be the exponential kernel, then \\\\psi_K implements the soft-max attention head, which aligns with the original GPT-based attention head. 
However, we highlight the Hilbert kernel because the exponential kernel smoother is not a consistent predictor\\u2014it does not converge to the optimal solution. This distinction is why we emphasize the Hilbert kernel in our argument, as it provides a more robust theoretical foundation for understanding self-attention's capabilities and gives a consistency guarantee for any statistical task.\"}",
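The exponential-versus-Hilbert distinction discussed in this exchange can be sketched concretely. The following is a minimal illustration only (not code from the paper; the kernels follow their standard definitions, and all names and the toy target function are ours):

```python
import numpy as np

def kernel_smoother(X, Y, x_query, kernel):
    """Kernel smoothing estimate: sum_i K(x, x_i) y_i / sum_i K(x, x_i)."""
    w = np.array([kernel(x_query, xi) for xi in X])
    return (w @ Y) / w.sum()

def exp_kernel(x, xi):
    # Exponential kernel: normalizing these weights yields a softmax over
    # inner products, i.e., one row of a softmax-attention matrix.
    return np.exp(x @ xi)

def hilbert_kernel(x, xi):
    # Hilbert kernel: K(x, x_i) = 1 / ||x - x_i||^d, with d the input dimension.
    # The tiny constant only guards against division by zero at a data point.
    return 1.0 / (np.linalg.norm(x - xi) ** len(x) + 1e-12)

rng = np.random.default_rng(1)
n, d = 500, 2
X = rng.uniform(-1, 1, size=(n, d))
Y = np.sin(3 * X[:, 0]) + X[:, 1]          # a smooth toy target function
x_q = np.array([0.3, -0.2])

print(kernel_smoother(X, Y, x_q, exp_kernel))      # softmax-attention estimate
print(kernel_smoother(X, Y, x_q, hilbert_kernel))  # Hilbert-kernel estimate
```

Both smoothers return convex combinations of the labels; the point of the exchange above is that only the Hilbert choice carries a consistency guarantee as the context length grows, which no fixed exponential kernel provides.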
"{\"comment\": \"We sincerely thank the reviewer for engaging in an extended discussion. This provides us with an excellent opportunity to clarify our work.\\n\\n**Regarding context scaling in MLPs:** We will explicitly clarify in the final manuscript that while MLPs may initially exhibit improvements with added context, they do not consistently improve with increasing context length.\\n\\n**On the topic of the Hilbert kernel and its relationship to softmax attention:** Our theoretical results aim to illustrate that standard softmax attention is a specific instance of a more general and powerful algorithm\\u2014kernel smoothing. This perspective reveals that kernel smoothing with the Hilbert kernel can be consistent even without requiring additional learned parameters. By framing softmax attention within this broader kernel-based perspective, we hope to inspire the development of more general attention mechanisms in future works.\"}",
"{\"comment\": \"I thank the authors for their responses. I appreciate the clarifications. However, even after reading your responses, I do not think there are important insights that I take away from this work. I would appreciate it if the authors tried to sharpen the message of the paper and conducted better experimentation to draw more insightful conclusions. For instance, the experiment with SGPT involves a multi-layer architecture. Even though query, key, and value matrices are set to identity, MLP matrices are not, and they should play a role in improving in-context learning too. Hence, it is not clear how much the attention structure alone contributes to in-context learning.\"}",
"{\"comment\": \"> **Reviewer comment:** This is fine, and makes for an interesting take, but should be clarified in your manuscript...\\n\\n**Our response :** We are happy to clarify this point in the final manuscript. We will emphasize that while MLPs may initially show improvements with added context, they do not consistently scale with context length.\\n\\n> **Reviewer comment:** My apologies, I still don\\u2019t understand. If (1) is correct -- that is, a Transformer (in the original sense, with softmax attention) can implement the Hilbert estimate -- how do you show this? Is there a derivation somewhere in the manuscript? You appear to claim that softmax attention implements the exponential kernel, which you state is not a consistent predictor (whereas a Hilbert estimate is)?\\n\\n**Our response:** To clarify, softmax attention does not implement the Hilbert estimate. Instead, softmax attention implements a kernel smoother with the exponential kernel. Indeed, this has been shown in prior work [see, e.g., Transformer Dissection: A Unified Understanding of Transformer\\u2019s Attention via the Lens of Kernel](https://arxiv.org/abs/1908.11775).\\n\\n> **Reviewer comment:** Do you mean to imply that a different choice of attention nonlinearity will implement a Hilbert estimate instead?\\n\\n**Our response :** Yes, provided the data is on the unit sphere in d dimensions, attention can implement the Hilbert estimate simply by changing the nonlinearity. For data not on the unit sphere (the general case), we implemented the Hilbert estimate directly by replacing attention.\"}",
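The unit-sphere claim in the last response can be checked directly: for unit vectors, ||x - x_i||^2 = 2 - 2<x, x_i>, so the Hilbert weight 1/||x - x_i||^d is a pointwise function of the usual attention score <x, x_i> and can replace the softmax nonlinearity. A minimal sketch (our illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50, 4

def unit(v):
    return v / np.linalg.norm(v)

# Keys and query on the unit sphere, as the response assumes.
X = np.array([unit(rng.standard_normal(d)) for _ in range(n)])
x = unit(rng.standard_normal(d))

# Hilbert weights computed from pairwise distances ...
w_dist = 1.0 / np.linalg.norm(X - x, axis=1) ** d

# ... equal a pointwise nonlinearity applied to the attention scores <x, x_i>,
# since ||x - x_i||^2 = 2 - 2 <x, x_i> on the unit sphere.
scores = X @ x
w_attn = (2.0 - 2.0 * scores) ** (-d / 2)

print(np.allclose(w_dist, w_attn))  # → True
```

Normalizing either set of weights gives the same smoothing operator, so on the sphere only the nonlinearity distinguishes the Hilbert estimate from softmax attention.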
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This study examines task versus context scaling in various in-context regression tasks, finding that Transformers (but not MLPs) can scale with increasing context sizes, as well as tasks. The authors suggest that a Transformer's ability to context-scale stems from its ability to implement kernel smoothers in the attention matrix. Equipping an MLP with features derived from these kernel smoothers enables it to also context-scale.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"I found the paper overall to be very well-written and a pleasure to read. The topic is extremely important, particularly in our post-ChatGPT era, and deals with a critical ability in Transformers. The contrast to MLPs sparks a fascinating discussion about the relative merits of different architectures.\", \"weaknesses\": \"I would love to see this manuscript published at ICLR, but there are a few oversights that prevent me from assigning a higher score. If these are able to be addressed, I will be delighted to raise my score.\\n\\nThe discussion on context-scaling in MLPs appears to be drawing from prior work by Tong and Pehlevan (https://arxiv.org/abs/2405.15618). The authors claim that MLPs do not context-scale, but Tong and Pehlevan seem to be showing otherwise. I may be misunderstanding both sides here, but Fig 1d and 1i of Tong and Pehlevan appear to demonstrate that at least MLP-Mixers continue to do well for arbitrary contexts. While MLP performance decays as context length increases in ICL regression, it doesn't appear to be the case for ICL classification. Were you able to look at classification tasks as well? Further, it looks like Tong and Pehlevan made the choice of plotting *excess* MSE above Bayes optimal rather than raw MSE. Because Bayes optimal MSE falls as context length increases, zero excess MSE would imply context-scaling. 
Looking at Figure 6b, for fewer dimensions than the $d= 8$ you tested, it does appear that MLPs adhere to the Bayes optimal level before failing at longer contexts. \\n\\nOverall, it would appear that context-scaling in MLPs does happen, but is bottlenecked by some aspect of insufficient data and long inputs, rather than some inability to implement kernel smoothers. MLP-Mixers, which do not have any product interactions that could implement a kernel smoother in an obvious way, continue to do well also. Indeed, it looks like in your Figure 7B, you do demonstrate some context-scaling in unmodified MLPs for 5M-pretraining tasks (top row), quite similar to using $\\\\psi_L$ features.\\n\\nAdditionally, I thought that kernel smoothers are weak in high dimensions, and require a dataset size that is exponential in the input dimension in order to interpolate well -- a classical curse of dimensionality. However, modern Transformers routinely handle token embeddings with dimensions that number in the tens of thousands, which would presumably defeat a kernel smoother even if it were exposed to an Internet-scale corpus -- and in-context, no less! It's quite possible I misunderstand this aspect of your analysis, but it seems implausible that a kernel smoother interpretation of attention is applicable to real-world Transformers?\\n\\nFinally, you mention there is a theoretical connection between having identity KQV matrices and the Hilbert estimate kernel, but I couldn't find the derivation in your manuscript. This could very well be a blatant oversight on my part, but could you point me to its location?\", \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
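The curse-of-dimensionality concern in this review can be illustrated with a short numeric sketch (our own illustration, not from the paper): for a fixed sample size, nearest-neighbor distances grow rapidly with dimension, which is precisely what starves local kernel smoothers.

```python
import numpy as np

rng = np.random.default_rng(3)

def median_nn_distance(n, d):
    """Median nearest-neighbor distance among n uniform points in [0, 1]^d."""
    X = rng.uniform(size=(n, d))
    # Squared pairwise distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    G = X @ X.T
    sq = np.diag(G)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * G, 0.0)
    np.fill_diagonal(D2, np.inf)  # exclude each point from its own neighbors
    return float(np.median(np.sqrt(D2.min(axis=1))))

# For a fixed sample size, neighbors drift far away as dimension grows.
for d in (2, 8, 64):
    print(d, round(median_nn_distance(500, d), 3))
```

Whether this dooms a kernel-smoothing view of real Transformers depends on the effective rather than ambient dimension of token embeddings, which the review leaves as an open question.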
"{\"summary\": \"The authors study an important question: how does in-context learning in Transformers depend on the number of in-context examples as well as the number of overall tasks? They draw a connection to kernel smoothing and demonstrate that a simplified version of the Transformer architecture can implement this algorithm equally well. They show that in contrast to Transformers, MLPs do not exhibit scaling with in-context examples, but that a feature map inspired by the Transformer's mechanism can yield successful improvement with more in-context examples.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"I really liked the introduction and thought that the question of context-scaling was nicely set up.\", \"I appreciated the authors considering a wide range of different tasks that they collected from the relevant literature.\", \"Building part of the SGPT into the MLP was a neat method for illustrating the distinct mechanism of each.\"], \"weaknesses\": \"Unfortunately, I do not think this work is ready for publication in its current state. Primarily, I believe that the paper does not provide sufficiently novel insight relative to the prior literature.\\n\\nNotably, improvement of Transformer performance with the number of in-context examples was already noted (as the authors lay out in the related work section), e.g. in Bai et al. (2023), and the prior theoretical literature also explains why this would be the case, as it draws the connection between ICL in Transformers and gradient descent and kernel smoothing --- both of which improve with the number of samples. I do not understand why previous theoretical results (e.g. Proposition 1 in von Oswald et al. (2023)) would only apply to fixed context length.\\n\\nIt is also unclear to me how SGPT provides a novel insight into the mechanism of ICL compared to, say, the construction in von Oswald et al. 
(2023), who also explicitly draw the connection to kernel smoothing and use a similar set of simple key, query, and value matrices (especially when $W_0=0$ in their construction in Appendix A.1). The authors argue that the simplicity of SGPT is a substantial strength of this paper, as it demonstrates that there are many problems such a simple architecture can solve. But it is unclear to me whether this is a theoretical argument (which seems to make the connection to kernel smoothing, in which case I'm unsure how this is different from the insight by von Oswald et al.) or an empirical argument (in which case I think the authors would have to demonstrate concretely that SGPT outperforms kernel smoothing algorithms).\\n\\nAs I noted, I think the contrast to MLPs and providing the modified features to the MLPs was interesting. I think it would be important, however, to provide insight into *how* the vectorized component enables them to scale with the number of examples.\\n\\nTaken together, I think the paper in its current form is not sufficiently distinct from existing work --- or at least does not explain sufficiently clearly how it is different. As I noted above, I do think that the authors focus on a really interesting question (context scaling) that provides a different angle from prior work. However, for the paper to be ready for publication, I think this investigation would have to further explore how this angle can change our theoretical understanding of context scaling.\", \"questions\": [\"Why do previous theoretical results only apply to fixed context length, as stated in l.157-159? (See weaknesses.)\", \"The SGPT as defined in Eq. 13 appears to be more specific than the generic feature map $\\\\psi$ you are then introducing in Equation (9). Is that true and if so, can you explain how this generalized version connects to the SGPT as well as the generic Transformer architecture?\", \"Tong & Pehlevan (2024) also show that MLPs cannot context scale (Fig. 
1d). This is currently not reflected in your related work section (l. 143-146). Could you please clarify and explain how your findings relate to these prior findings?\"], \"minor_comments\": \"L. 51: \\u201can non-exhaustive\\u201d -> \\u201ca non-exhaustive\\u201d\\nL. 157: typo?\\nL. 213: \\u201cidentify\\u201d -> \\u201cidentity\\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"> **Reviewer comment:** I think this is a neat finding. However, it seems that your theoretical construction cannot account for this, correct? I think this could be a good motivation for better understanding how exactly this MLP overcomes these prior theoretical constructs. However, without such deeper analysis, I do not think it is a sufficient result.\\n\\n**Our response :** We are not entirely sure what you mean by \\\"theoretical construction,\\\" since we do not have any weight construction in our work. We would appreciate a clarification. \\n\\nHowever, our explanation does indeed provide insight into why the MLP enables SGPT to outperform the prior understanding that transformers merely implement a single step of gradient descent. Specifically, our work shows that while attention effectively implements an in-context estimator (e.g., a single step of gradient descent or kernel smoothing), the MLP leverages additional information across tasks. A helpful way to conceptualize this is to view the MLP as acting like a \\ud835\\udc58-nearest neighbors estimator across tasks, enriching the model's ability to generalize.\\n\\nDeveloping a rigorous theoretical framework to formalize this insight and analyze the corresponding sample complexity (of task-scaling) of MLPs would indeed be an interesting and valuable direction for future work. However, such an analysis is beyond the scope of this paper. Instead, we aim to provide the key intuition and empirical evidence to motivate further study in this direction.\\n\\n> **Reviewer comment:** Isn't your construction here the exact same used in von Oswald et al., except without the scaled projection matrix though? Please clarify if I'm misunderstanding.\\n\\n**Our response :** We want to emphasize again that we do not have any construction in our paper. Furthermore, if you are referring to Proposition 1 in von Oswald et al.'s work, their construction is limited to a fixed context length N_y (using their notation). 
While this is effective for their analysis, it does not account for how context scaling works, which is a key focus of our work. Our contributions even go beyond this by exploring the interplay between context and task scaling, which their construction does not address.\"}",
"{\"comment\": \"Thank you for your response.\\n\\n> **Reviewer comment:** What I mean by construction is your specification of in Section 5.1 and Appendix B.\\n\\n**Our response :** The \\\\psi_L feature maps in Section 5.1 come from prior work, which we have cited. We did not claim the construction of these features as part of our contribution. Our contribution is to show that these features, when used together with raw features, can achieve context-scaling empirically.\\n\\n> **Reviewer comment:** With that in mind, I think equation (8) in their paper could directly be applied to different context lengths and implement a gradient step, correct? As such, it is unclear to me why this previous result only applies to a fixed context length.\\n\\n**Our response :** In the construction of Proposition 1, although we can apply equation (8) to different context lengths, equation (8) does not provide any performance guarantee as the \\\"learning rate\\\" might be suboptimal. Therefore, Proposition 1 does not address context-scaling. \\n\\n> **Reviewer comment:** I'm merely trying to better understand the claims about novel theoretical contributions in this paper.\\n\\n**Our response :** As we mentioned in lines 118 to 122, our theoretical contribution is viewing attention from a kernel smoothing perspective. Specifically, when the Hilbert estimate is chosen as the smoothing method, the model implements a statistically optimal (consistent) estimate as the context length approaches infinity.\\n\\nThe main contributions of our paper are as mentioned in lines 114-125.\"}",
"{\"comment\": \"Thanks for the additional clarifications!\\n\\n>We would like to point out that Figure 7 includes four experiments. In three of them, the MLP clearly does not exhibit task-scaling behavior at all. In the specific case you mention (top-right panel), while there is an initial improvement with increasing context length, the performance does not continue to improve with further increase of in-context length, in contrast to attention based features \\\\psi_l and \\\\psi_H that provides consistent improvement with increased context length.\\n\\nThis is certainly true, so perhaps your claim should be adjusted to \\\"MLPs sometimes context scale, and sometimes do not context scale,\\\" or even \\\"MLPs context scale for small context lengths given sufficient data\\\"? To claim that \\\"MLPs do not context scale\\\" seems incorrect (or at least, overly coarse) in light of this evidence.\\n\\n>Please see lines 756-773. This derivation holds for any choice of kernel including the Hilbert kernel.\\n\\nPerhaps my confusion is the following. It seems like there are two possible interpretations to your claim about the Hilbert estimate:\\n1. A Transformer's self-attention can implement the Hilbert estimate.\\n2. A Transformer's self-attention matrix can be interpreted as a kernel smoothing operator. One such kernel is the Hilbert estimate.\\n\\nI interpreted your claim to be (1), but it sounds like you actually intend (2)? If the latter, I'm unsure how Hilbert estimates fit into your overall argument, and why you included them? Why not stop at the 1-step GD kernel (in eq 10), which has a natural connection to the attention matrix structure? Would a Transformer implement anything that looks like a Hilbert estimate in practice?\"}",
"{\"comment\": \"We thank the reviewer for the comments. We address the concerns as follows.\\n\\n> **Reviewer comment:** \\n> Notably, improvement of Transformer performance with the number of in-context examples was already noted (as the authors lay out in the related work section), ... I do not understand why previous theoretical results (e.g. Proposition 1 in von Oswald et al. (2023)) would only apply to fixed context length.\\n\\n>It is also unclear to me how SGPT provides a novel insight into the mechanism of ICL compared to, say, the construction in von Oswald et al. (2023), who also explicitly draw the connection to kernel smoothing and use a similar set of simple key, query, and value matrices (especially when W0=0 in their construction in Appendix A.1). .... But it is unclear to me whether this is a theoretical argument (which seems to make the connection to kernel smoothing, in which case I'm unsure how this is different from the insight by von Oswald et al.) or an empirical argument (in which case I think the authors would have to demonstrate concretely that SGPT outperforms kernel smoothing algorithms).\\n\\n\\n**Our response :** \\nA novel insight of our work beyond that of von Oswald et al. (2023) is that SGPT improves over both standard kernel smoothers and 1-step of GD. Indeed, one layer of SGPT is better than both of these previous algorithms as it combines an MLP and kernel smoothing to simultaneously task-scale and context-scale. Empirically, we demonstrate in Fig. 7 that combining an MLP with kernel smoothing features significantly outperforms 1-step of GD and kernel smoothers on both linear regression and nonlinear tasks. This contrasts with von Oswald et al. (2023), who focus on demonstrating the equivalence of a one-layer transformer to a single step of GD (or kernel smoothing in their Appendix A.1). \\n\\n\\n> **Reviewer question:** \\n>Why do previous theoretical results only apply to fixed context length, as stated in l.157-159? 
(See weaknesses.)\\n\\n**Our response :** \\nA fixed context length is part of the assumptions in the previous theoretical results (Von Oswald et al. (2023), Ahn et al. (2023), Zhang et al. (2024a), Mahankali et al. (2024), Zhang et al. (2024b)). In particular, Proposition 1 in Von Oswald et al. (2023) states that for any given $N$ pairs of context examples, there exists a transformer such that its output is identical to the one-step GD output over the $N$ pairs of context examples. Therefore, Proposition 1 cannot be used if we test the same transformer with a different number of context examples. \\n\\nThe only work we are aware of for analyzing varying context-length is Theorem 5.3 in Wu et al. (2024), which allows testing a pre-trained attention model with a varying context length. However, their results are only tight when the context length is close to the one used in training. \\n\\n\\n> **Reviewer question:** \\n>The SGPT as defined in Eq. 13 appears to be more specific than the generic feature map \\\\psi you are then introducing in Equation (9). Is that true and if so, can you explain how this generalized version connects to the SGPT as well as the generic Transformer architecture?\\n\\n**Our response :** \\nThank you for pointing this out. Equations 13 and 9 are essentially the same \\u2013 the only difference is the second residual connection in the transformer. In the analysis presented in Section 5, we observed that adding this residual did not significantly impact the performance across the experiments in that section. Therefore, we chose to omit it in Equation 9 for the sake of simplicity. \\n\\n\\n> **Reviewer question:** \\n>Tong & Pehlevan (2024) also show that MLPs cannot context scale (Fig. 1d). This is currently not reflected in your related work section (l. 143-146). Could you please clarify and explain how your findings relate to these prior findings?\\n\\n**Our response :** \\nFig. 
1d of Tong & Pehlevan is measuring excess risk, which refers to the difference between the model\\u2019s MSE and that of optimal ridge regression. Optimal ridge regression varies with context length, thus it is possible for excess risk to increase even though MSE decreases. In contrast, our notion of context-scaling is defined based on raw MSE. Furthermore, in that figure, even transformers have higher excess risk (error bars indicate performance approaches that of random prediction) as context length increases even though we know transformers do context-scale. \\n\\n**Our response to reviewer minor comments:** \\nThank you for the comments. We will make the modifications.\\n\\nWe hope our responses address your concerns. If any points remain unclear, we\\u2019d be happy to clarify. If the manuscript now aligns with your expectations, we\\u2019d appreciate your consideration of an updated score.\"}",
"{\"comment\": \"Thanks for the additional details! I have a few remaining questions\\n\\n> Per the reviewer\\u2019s suggestion, we added an additional experiment showing that MLPs cannot context scale even for classification. Below, we consider a classification problem where the label is given by the \\u201csign\\u201d of the output of a linear regression model - if the sign is positive, the label is 0 and if it is negative, the label is 1. We find MLP performance first improves and then gets worse with increasing context length indicating a failure to context scale.\\n\\nNot meaning to split hairs, but your setup sounds more like logistic regression than classification. By classification, I had more in mind a setup with multiple clusters defined in-context, as in [Reddy 2024](https://arxiv.org/abs/2312.03002), which Tong and Pehlevan seem to show *does* convincingly context scale, even by your definition. Do you have an idea why this might be the case?\\n\\n> Note that by context-scaling, we refer to a setting in which the number of pretraining tasks is fixed, and we expect performance to improve as context examples increase \\u2013 rather than initially improve and then degrade at a certain point.\\n\\nI'm not sure if I buy this perspective. If we take the numbers you provide in your second table and truncate them at context length 80 (a relatively long context, compared to where you start), it would appear that all models (including the MLP) context-scale quite convincingly. At least in this fairly large window, the MLPs performance improves substantially with longer context. That the MLPs' performance subsequently declines seems to be an independent issue. 
It's certainly up to you how you define the notion of \\\"context scaling\\\" exactly, but it seems somewhat disingenuous to claim that the MLP does not context scale when its performance improves substantially with longer contexts.\\n\\n> The derivation is in Appendix B\\n\\nMy apologies, there doesn't seem to be any derivation regarding the Hilbert estimate and Transformers in Appendix B. Perhaps it's still elsewhere, or I'm missing something blatant? I'm certain my math skills are not as sharp as yours, so I would love to see this worked out in detail, even if it's obvious!\\n\\nThanks again for the notes. Unfortunately my concerns remain, and I maintain my current score.\"}",
"{\"comment\": \"Thank you for your response. What I mean by construction is your specification of $\\\\psi$ in Section 5.1 and Appendix B.\\n\\n> We want to emphasize again that we do not have any construction in our paper. Furthermore, If you are referring to Proposition 1 in von Oswald et al.'s work, their construction is limited to a fixed context length N_y(using their notation). While this is effective for their analysis, it does not account for how context scaling works, which is a key focus of our work.\\n\\nI assume you mean the context length $N$, not $N_y$, right? I think $N_y$ in their paper is the output dimensionality. With that in mind, I think equation (8) in their paper could directly be applied to different $N$ and implement a gradient step, correct? The only distinction lies in the projection matrix $\\\\eta/N I$; while this projection matrix would have to be held fixed at some value, this could be understood as different learning rates for different numbers of examples. As such, it is unclear to me why this previous result only applies to a fixed context length.\\n\\nIt is true that this paper does not explicitly point out an application to different context length and I want to be clear that I think there is value in explicitly studying the interplay between context scaling and task scaling. I'm merely trying to better understand the claims about novel theoretical contributions in this paper.\"}",
"{\"comment\": \"Thanks for the additional notes, and engaging in a long discussion!\\n\\n> To clarify, softmax attention does not implement the Hilbert estimate. Instead, softmax attention implements a kernel smoother with the exponential kernel.\\n\\nThis part of the discussion still confuses me, and I remain unsure about the overall intent. Is the goal to understand something about Transformers? If so, why bother with the Hilbert estimate in the first place, given that a vanilla Transformer (with softmax attention) cannot implement it? Or is it to demonstrate that kernel methods (including a kernel-based interpretation of attention) can context-scale? Though you weren't able to show consistency for the exponential kernel, why does it still seem to context scale quite well in your experiments? Put another way, I'm unsure how the theory you describe fits into the broader picture of your results, and it remains unclear if it's theory for the sake of having a theory component to your manuscript, or whether you're hoping to describe something deeper?\\n\\nIf you're able to clarify your point about context scaling in MLPs, and refine your theoretical treatment of this subject, I think this would make for a fantastic paper. Given the current issues, however, I retain some reservations about recommending full acceptance. I update my score to a 6.\"}",
"{\"comment\": \"> **Reviewer comment:** On the role of context-length: The authors state that existing works fail to explain the context scaling capabilities of transformers. In the example, I gave in 3a), the Bayes optimal predictor is implemented by the transformer. This Bayes optimal predictor of course improves with context length. In this sense, I don't quite get why do the authors say that existing works fail to explain context scaling capabilities.\\n\\n**Our response :** Again, while there exist constructions of neural networks that can implement optimal predictors, it is far from clear that these models can learn these solutions from data. This is a primary limitation of the works which focus on constructions, as noted in our related works section.\\n\\n> **Reviewer comment:** On MLPs and context scaling: The authors also mention that it is unclear whether MLPs scale with context and they are the first to dive into this. Perhaps the authors have missed https://arxiv.org/pdf/2311.18194.\\n\\n**Our response :** We are not sure what part of the linked paper is relevant to our work. In particular the models they considered are DeepSet which are not standard MLPs operating on vectorized data. We would appreciate a clarification.\\n\\nWe hope our responses address your concerns. If any points remain unclear, we\\u2019d be happy to clarify. If the manuscript now aligns with your expectations, we\\u2019d appreciate your consideration of an updated score.\", \"title\": \"Official Comment by Authors (Part 2)\"}",
"{\"comment\": \"Thanks for the additional clarification.\\n\\n>When we state that a model \\\"context scales,\\\" we mean it consistently improves as more context information is provided. Consistency is crucial because, with additional context data, one naturally expects a model to perform better. We will clarify this in the final version of the paper. Specifically, we emphasize that while MLPs may initially show improvements with added context, they do not consistently scale with context length.\\n\\nThis is fine, and makes for an interesting take, but should be clarified in your manuscript. Reading your document, it was not clear to me what you meant by \\\"MLPs do not context scale\\\" when their performance appears to improve sometimes as the context length increases. If you mean something weaker, this should be stated in the text.\\n\\n>Both interpretations (1) and (2) are correct, and we appreciate the opportunity to clarify. As mentioned in line 431, if the kernel is chosen to be the exponential kernel, then \\\\psi_K implements the soft-max attention head, which aligns with the original GPT-based attention head. However, we highlight the Hilbert kernel because the exponential kernel smoother is not a consistent predictor\\u2014it does not converge to the optimal solution. This distinction is why we emphasize the Hilbert kernel in our argument, as it provides a more robust theoretical foundation for understanding self-attention's capabilities and gives consistency guarantee for any statistical task.\\n\\nMy apologies, I still don't understand. If (1) is correct -- that is, a Transformer (in the original sense, with softmax attention) can implement the Hilbert estimate -- how do you show this? Is there a derivation somewhere in the manuscript? You appear to claim that softmax attention implements the exponential kernel, which you state is not a consistent predictor (whereas a Hilbert estimate is)? 
Do you mean to imply that a different choice of attention nonlinearity will implement a Hilbert estimate instead?\"}"
]
} |
E8S5Upr6oO | MGMapNet: Multi-Granularity Representation Learning for End-to-End Vectorized HD Map Construction | [
"Jing Yang",
"Minyue Jiang",
"Sen Yang",
"Xiao Tan",
"Yingying Li",
"Errui Ding",
"Jingdong Wang",
"Hanli Wang"
] | The construction of vectorized high-definition map typically requires capturing both category and geometry information of map elements. Current state-of-the-art methods often adopt solely either point-level or instance-level representation, overlooking the strong intrinsic relationship between points and instances. In this work, we propose a simple yet efficient framework named MGMapNet (multi-granularity map network) to model map elements with multi-granularity representation, integrating both coarse-grained instance-level and fine-grained point-level queries. Specifically, these two granularities of queries are generated from the multi-scale bird's eye view features using a proposed multi-granularity aggregator. In this module, instance-level query aggregates features over the entire scope covered by an instance, and the point-level query aggregates features locally. Furthermore, a point-instance interaction module is designed to encourage information exchange between instance-level and point-level queries. Experimental results demonstrate that the proposed MGMapNet achieves state-of-the-art performances, surpassing MapTRv2 by 5.3 mAP on the nuScenes dataset and 4.4 mAP on the Argoverse2 dataset, respectively. | [
"Online HD map construction,vectorized representation,autonomous driving"
] | Accept (Poster) | https://openreview.net/pdf?id=E8S5Upr6oO | https://openreview.net/forum?id=E8S5Upr6oO | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"w82MT8tdcd",
"vk8xqR85vm",
"ubFnemCDe0",
"sjw0djvehd",
"pvVLRXYVTx",
"lfXTEv2siI",
"gsRg19HTeE",
"fi8MHXzh46",
"bZfxYqHu0D",
"alM7WGpOIn",
"SaNYSvjUbO",
"Ok4rX1p4wc",
"OdazzVI5TG",
"GpjmoFdrAE",
"CWQwuwN7fa",
"8Pm9hUrZYx",
"8J7K97ATwF",
"6j3OmSGhNT",
"5alARrCs54",
"1L2uvwvNQI"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1737523706710,
1731991141454,
1730606129382,
1732615169944,
1731989447077,
1732530179941,
1731989082247,
1732587622243,
1732540098268,
1731985292877,
1732569990408,
1732612066304,
1732616054980,
1731985244287,
1732440117816,
1734318662870,
1730438219404,
1732615279662,
1730695018434,
1732539200912
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Reviewer_cU8c"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Reviewer_hewp"
],
[
"ICLR.cc/2025/Conference/Submission5440/Reviewer_cU8c"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5440/Area_Chair_kAhD"
],
[
"ICLR.cc/2025/Conference/Submission5440/Reviewer_pXZj"
],
[
"ICLR.cc/2025/Conference/Submission5440/Reviewer_pXZj"
],
[
"ICLR.cc/2025/Conference/Submission5440/Reviewer_hewp"
],
[
"ICLR.cc/2025/Conference/Submission5440/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response to Reviewer pXZj\", \"comment\": \"We thank the reviewer for the supportive comments. The detailed response to each point is as follows.\\n\\n> **W1.The paper\\u2019s description can be overwhelming for readers who are not deeply familiar with the HD map construction topic (e.g., me).**\\n\\n- Thank you for your valuable suggestion.\\n- The map reconstruction task primarily involves obtaining unified Bird\\u2019s-Eye View (BEV) features $\\\\mathbf{F}_{bev}\\\\in\\\\mathbb{R}^{C\\\\times H\\\\times W} $( $C, H, W$ represent the feature channels, height, and width of the BEV feature) from surround-view cameras images $I$.\\nSubsequently, a DETR-like decoder is employed to perceive and vectorize map elements.\\nEach vectorized map element $\\\\mathbf{P}$, \\ncomprises a category (such as pedestrians, dividers, and boundaries) and a series of consecutive vector coordinate points $\\\\\\\\{v_i\\\\\\\\}\\\\_{i=0}^{N_p-1}$, where $N_p$ is the number of points and $v_i$ is the coordinate of the $i$-th point. 
This vectorized representation allows for a more precise depiction of map elements, resulting in high-precision map polylines.\\n\\n- The principal challenge of this problem is to capture precise local coordinates while simultaneously learning and modelling each instance.\\nTo address this challenge, our model introduces a multi-granularity representation mechanism that facilitates the simultaneous modeling of entire instances and their intricate points, thereby improving the performance of High-Definition vectorized map representations.\\n\\n> **W2.The paper could be strengthened by providing a detailed analysis of the time and space complexity of MGMapNet compared to baseline models.**\\n\\n| | MapTR [ICLR2023] | MapTRv2 [IJCV2024] | MGMap [CVPR2024] | MapQR [ECCV2024] | MGMapNet |\\n|:-----------------:|:-----------------:|:-------------------:|:----------------:|:-----------------:|:-----------:|\\n| **FPS** | **16.9** | 14.1 | 12 | 11.9 | 11.7 |\\n| **GPU mem. (MB)** | **2314** | 2656 | 2402 | 2648 | 2790 |\\n| **Params. (MB)** | **35.9** | 40.3 | 55.9 | 125.3 | 70.1 |\\n| **NuScenes (mAP)** | 50.3 | 61.5 | 64.8 | 66.4 | **66.8** |\\n| **Argoverse2 (mAP)**| 58 | 67.4 | - | 68.2 | **71.2** |\\n\\n- We agree that analyzing the computational and memory resources is essential for assessing efficiency. \\n\\n- In Table above, we present a comprehensive comparison of the latest models alongside the primary baseline, detailing GPU memory usage, FPS, parameter counts, and performance. \\nTime and space complexity can be derived from FPS and GPU memory comparisons.\\n\\n\\n - **GPU mem. comparison.** The memory usage (MB) of MapTR, MapTRv2, MGMap, MapQR, and MGMapNet are 2314, 2656, 2402, 2648, and 2790 respectively. 
Our MGMapNet has a slight increase in memory usage compared to other methods, which is understandable given we retained two types of queries for different output regressions and classifications.\\n\\n - **FPS comparison.** MGMapNet, MapQR, and MGMap show similar performance with FPS scores of 11.7, 11.9, and 12, respectively. Although slightly slower than MapTRv2, MGMapNet's inference time complexity is similar to that of the latest methods.\\n\\n - **Params comparison.** The parameters (MB) of MGMapNet, MapQR, and MGMap are 70.1, 125.3, and 55.9, respectively. Even though MGMapNet has a slightly higher parameter count due to its Multi-Granularity query design and Point Instance Interaction, it still outperforms and has fewer parameters than MapQR\\u2019s 125.3MB. We believe there\\u2019s substantial room for optimization in MGMapNet.\\n\\n- In an overall efficiency analysis, our MGMapNet, thanks to its multi-granularity representation, achieves better performance while maintaining similar parameters, speed, and memory usage compared to the latest methods. The limitations of our method in terms of speed have been mentioned, but we believe there is a significant room for optimization. Therefore, MGMapNet remains a competitive model.\\n\\n> **W3.It is not clear why the training epochs are set to have multiple values for various models, and why the long training schedule leads to fair comparison.**\\n\\n- Previous comparative studies generally employed two distinct training epoch configurations, conducting long epochs to maintain consistency with prior methods and ensure a fair comparison.\\n\\n- On one hand, shorter training epochs might emphasize the models\\u2019 short-term overall performance and may result in some models not fully converging. On the other hand, longer training epochs ensure that models generally reach a converged state. 
This dual setup allows us to evaluate the performance of each model more objectively and comprehensively.\\n\\nThanks again and we are happy to take any questions / further discussions.\"}",
"{\"summary\": \"The paper introduces MGMapNet, designed to effectively model map elements through a multi-granularity representation by integrating both coarse-grained instance-level and fine-grained point-level queries to enhance map modeling. The framework employs a Multi-Granularity Aggregator. Besides, there is a Point Instance Interaction module, which facilitates the exchange of information between the instance-level and point-level queries, thereby improving the overall modeling capability of the network.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe problem studied in the paper is very important in practice and find applications in real world.\\n2.\\tThe paper is clearly written and easy to follow. \\n3.\\tExperiments are conducted to verify the performance of the proposed method.\", \"weaknesses\": \"1.\\tThe challenges and contributions of the proposed techniques require further elaboration. What are the specific challenges to design these techniques in section 3?\\n2.\\tThe encoders and decoders are mostly MLP-based. It is difficult to understand the logic, rationale and difficulty to apply the techniques.\\n3.\\tSome evaluation metrics in experiments are not explained, e.g. AP_ped and AP_div, and AP_bou in table 1.\\n4.\\tHow are the proposed techniques related to High-Definition?\\n5.\\tQuality of figures and tables can be improved. For example, Table 4 has too big font size.\", \"questions\": \"1.\\tIs it possible to consider instance-2-instance attention? Why not compare this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thanks for your feedback\", \"comment\": \"Dear Reviewer cU8c,\\n\\nThank you for your feedback! We greatly appreciate the constructive reviews and valuable suggestions to enhance our work.\\n\\nBest regards,\\n\\nAuthors of #5440\"}",
"{\"title\": \"Response to Reviewer cU8c Part II (Part II of II)\", \"comment\": \"> **Q1:Is it possible to consider instance-2-instance attention? Why not compare this?**\\n\\n- Thank you for your valuable suggestion. \\n\\n- In fact, MGMapNet employs instance-to-instance attention. As shown in Figure 2 of the paper, the Self Attention preceding Multi-Granularity Attention follows this structure and facilitates interactions among instances, aligning with the default configuration.\\nConsequently, incorporating instance-to-instance attention within MGA is unnecessary, as inter-instance interactions are already implemented prior to input through the Self Attention module.\\n\\n- Multi-Granularity Attention begins by generating queries at multiple granularities and optimizing them through point-instance interaction. The point-to-instance attention enables corresponding instances to more effectively aggregate point features. \\nAdditionally, point-to-point attention not only within the current layer but also emphasises point features from the previous layer of the decoder.\\nThis integrated approach facilitates a coarse-to-fine refinement process for high-precision point queries, enhancing the model's ability to capture detailed spatial information.\\nSince the point query in MGA is optimized from coarse to fine, and the instance query itself is a coarse-grained query that has already been implemented in Self Attention, we did not include instance-to-instance attention within MGA.\\n\\n- In future work, integrating Self Attention within MGA to explore more comprehensive multi-granularity interactions constitutes a potential research direction.\\n\\nThanks again and we are happy to take any questions / further discussions.\"}",
"{\"title\": \"Kind Reminder to Reviewer cU8c for the Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer cU8c,\\n\\nThanks for your thoughtful review and valuable comments. During the discussion period, we also want to get some feedback from you.\\n\\nActually, your comments are particularly insightful, and we believe they will help strengthen our work significantly. In our rebuttal, we have carefully addressed each of your concerns with detailed responses. Specifically, we have included \\n- Included detailed explanations of the technological challenges and contributions, descriptions of the evaluation metrics used in the experiments, improved chart formats for better readability, and reorganized Figure 2 for enhanced clarity and aesthetics in the revised version we have uploaded.\\n- Provided explanations of the MLP-based model structure, proposed techniques related to High-Definition, and addressed the issue of instance-to-instance attention during the rebuttal phase.\\n\\nWe would sincerely appreciate it if we could get some feedback from you regarding the above concerns. If you have any further questions or require additional clarifications, please do not hesitate to let us know.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors of #5440\"}",
"{\"title\": \"Response to Reviewer cU8c Part I (Part I of II)\", \"comment\": [\"We thank the reviewer for the supportive comments. The detailed response to each point is as follows.\", \"> **W1.The challenges and contributions of the proposed techniques require further elaboration. What are the specific challenges to design these techniques in section 3?**\", \"We apologize for the unclear expression.\", \"The main challenge is to balance the trade-off between detail and overview representation. Existing state-of-the-art (SOTA) methods, such as MapTR and MapTRv2, utilize point-level queries to characterize map elements, whereas StreamMapNet employs instance-level queries. Each granularity of queries has its own advantages and disadvantages:\", \"**Instance-level queries** excel at capturing the overall category information of road elements but may struggle to accurately represent geometric details, especially for irregular or elongated map elements.\", \"**Point-level queries** can provide rich, detailed information, but can only represent instances by combining multiple point-level queries, lacking an overarching description of map elements.\", \"The balance between detail and overview remains a major challenge in current research, and existing methods do not adequately address this issue.\", \"Our contribution is the simultaneous acquisition of both detailed and comprehensive instance features. The multi-granularity mechanism is designed to solve this problem by effectively maintaining and updating queries of different granularities. MGMapNet overcomes the limitations of existing methods, achieving a balance between detailed and overall feature representations.\", \"We will elaborate it further in the revised version.\", \"> **W2.The encoders and decoders are mostly MLP-based. It is difficult to understand the logic, rationale and difficulty to apply the techniques.**\", \"We did not fully understand this statement. 
If we are not mistaken, the reviewer may be referring to positional encoding.\", \"The MLP-based structure in our decoder, as detailed in Equations 2 and 5, is primarily utilized to generate positional encodings. While many existing methods employ sinusoidal encodings for this purpose, our empirical observations indicate that sinusoidal encodings may exhibit inferior generalization capabilities and reduced performance compared to adaptive encodings implemented via MLPs.\", \"In Equation 8, employing an MLP to ensure the adequate aggregation of queries at two levels of granularity is considered a relatively reasonable approach.\", \"> **W3.Some evaluation metrics in experiments are not explained, e.g. $AP_{ped}$ and $AP_{div}$, and $AP_{bou}$ in table 1.**\", \"Thank you for pointing out the omission of explanations for some evaluation metrics in our experiments. We apologize for this oversight. Specifically, $AP_{ped}$ and $AP_{div}$, and $AP_{bou}$ refer to the Average Precision for pedestrians, dividers, and boundaries, respectively.\", \"We will include these clarifications in the revised manuscript to enhance clarity and comprehensiveness.\", \"> **W4.How are the proposed techniques related to High-Definition?**\", \"Traditional autonomous driving systems rely on high-precision maps created through offline annotation. In contrast, this paper primarily addresses sensor-based high-precision map reconstruction.\", \"The construction of high-definition maps using purely visual methods has become an increasingly challenging endeavour since MapTR.\", \"Vectorized HD map construction requires a higher level of precision, as map elements are represented using vectorized points to accurately depict features such as pedestrians, dividers, and boundaries. Our proposed MGMapNet, along with recent papers in the field, are specifically designed to enhance the accuracy and precision of map reconstruction.\", \"> **W5.Quality of figures and tables can be improved. 
For example, Table 4 has too big font size.**\", \"Thank you for your suggestions. The fonts in the figures were an oversight in our writing, and we will correct them in the revised version.\"]}",
"{\"title\": \"Thanks for your feedback\", \"comment\": \"Dear Reviewer hewp,\\n\\nThank you for your feedback. We greatly appreciate the valuable suggestions to enhance our work.\\n\\nBest regards,\\n\\nAuthors of #5440\"}",
"{\"title\": \"Kind Reminder to Reviewer pXZj for the Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer pXZj,\\n\\nThank you sincerely for your thoughtful review and valuable comments. Although we may not be from exactly the same field, we deeply appreciate your insights and the opportunity to engage with your feedback during the discussion period.\\n\\nYour comments are highly insightful, and we believe they offer meaningful guidance to further strengthen our work. In our rebuttal, we have carefully addressed each of your concerns with detailed responses. Specifically, we have:\\n\\n- Added an experimental analysis on efficiency in the supplementary materials and included additional challenges and contributions in Sections 1 and 3 in the latest revised version we uploaded.\\n\\n- Provided explanations regarding the experimental setup and problem description in the rebuttal.\\n\\nWe hope our responses have addressed some of your concerns. If there are any additional questions or further clarifications required, please do not hesitate to let us know. We would be more than happy to provide further details.\\n\\nThank you once again for your time and thoughtful review.\\n\\nBest regards,\\n\\nAuthors of #5440\"}",
"{\"title\": \"Response to Reviewer hewp Part II (Part II of II)\", \"comment\": \"> **W3. Do you use the distance from each point to the ground truth points as the loss for $\\\\mathcal{L}_{pts}$?Can you describe each loss term in detail?**\\n\\n- Yes, we do, and we apologize for the unclear descriptions of each loss term.\\n As you mentioned, the $\\\\mathcal{L}_{pts}$ is the loss calculated between each predicted point and the corresponding ground truth points. \\n\\n- Firstly, we find an optimal instance-level label assignment between predicted map elements and ground truth map elements using the Hungarian algorithm at the instance-level.\\nSecondly, the predicted points are paired with the ground truth points using the Hungarian algorithm at the point level, establishing a one-to-one point correspondence between \\n$\\\\mathbf{p}\\\\_i^{pred}$ and $\\\\mathbf{p}\\\\_i^{gt}$ ($\\\\mathbf{P}\\\\_i$ is the $i$-th coordinate points of instance). Following this matching, a point-wise $\\\\mathcal{L}\\\\_{pts}$ loss is applied to optimize the predictions. This loss calculation approach follows the methodology outlined in MapTR and MapTRv2.\\nThe $\\\\mathcal{L}\\\\_{pts}$ is formulated as:\\n$$ \\\\mathcal{L}_{pts} = \\\\frac{1}{N_p}\\\\sum\\\\_{i=1}^{N_p} \\\\\\\\| \\\\mathbf{p}_i^{pred} - \\\\mathbf{p}_i^{gt} \\\\\\\\|_1 $$\\nwhere $\\\\mathbf{p}_i^{pred}$ and $\\\\mathbf{p}_i^{gt}$ are the predicted and ground truth positions of point $i$, respectively, and $N_p$ is the number of points.\\n\\n- The $\\\\mathcal{L}\\\\_{pts}$, $\\\\mathcal{L}\\\\_{cls}$, $\\\\mathcal{L}\\\\_{dir}$ and $\\\\mathcal{L}\\\\_{dense}$ loss align with MapTRv2.\\nAdditionally, we introduce auxiliary losses, which comprise two components: the instance segmentation loss $\\\\mathcal{L}\\\\_{insseg}$ and the reference point loss $\\\\mathcal{L}\\\\_{ref}$. 
\\n\\n- The instance segmentation loss, denoted as $\\\\mathcal{L}\\\\_{insseg}$, not only segments BEV features but also retrieves more precise instance localization information for each individual query.\\nFirst, we compute the instance segmentation masks $M^{pred} \\\\in \\\\mathbb{R}^{H\\\\times W \\\\times N_q }$ by performing dot product operations between the updated instance-level queries $\\\\mathbf{Q}\\\\_{ins}\\\\in\\\\mathbb{R}^{N_q \\\\times C}$ and the BEV features $\\\\mathbf{F}\\\\in\\\\mathbb{R}^{H\\\\times W\\\\times C}$. \\nSubsequently, we utilize the indices of positive samples obtained through the Hungarian algorithm to retrieve their corresponding masks $M_{pos}^{pred}$ and ground truths $M_{pos}^{gt}$.\\nFor each positive sample instance mask $M_{pos}^{pred}\\\\in\\\\mathbb{R}^{H\\\\times W\\\\times N_{pos}}$ ($N_{pos}$ is the total number of positive queries), we separately compute the segmentation loss by employing both the Binary Cross-Entropy loss $\\\\mathcal{L}\\\\_{bce}$ and the Dice loss $\\\\mathcal{L}\\\\_{dice}$.\\nThe process of generating $M^{pred}$ is formulated as:\\n$$M^{pred}=\\\\mathbf{F} \\\\cdot \\\\mathbf{Q}\\\\_{ins}^{T},$$\\nwhere $\\\\cdot$ denotes the dot product operation, and the $\\\\mathcal{L}\\\\_{insseg}$ is formulated as:\\n$$\\\\mathcal{L}\\\\_{insseg} = \\\\frac{1}{N\\\\_{pos}} \\\\sum\\\\_{i=1}^{N_{pos}} (\\\\mathcal{L}\\\\_{dice}(M\\\\_{pos,i}^{pred},M\\\\_{pos,i}^{gt})+\\\\mathcal{L}\\\\_{bce}(M\\\\_{pos,i}^{pred},M\\\\_{pos,i}^{gt})), $$\\nwhere $M\\\\_{pos,i}$ denotes the $i$-th positive instance mask.\\n\\n- Additionally, the reference point loss $\\\\mathcal{L}\\\\_{ref}$ provides auxiliary supervision for reference points during each iteration of the decoder. \\nSimilar to the $\\\\mathcal{L}\\\\_{pts}$ loss, the $\\\\mathcal{L}\\\\_{ref}$ is computed by applying the $\\\\mathcal{L}\\\\_{pts}$ loss to the reference points $\\\\textbf{RF}$ and ground truth points $\\\\textbf{P}^{gt}$ at each layer. 
This ensures that each sampling point achieves a more reasonable and accurate distribution.\\n\\n- We will provide further elaboration on these details in the revised version.\\nThanks again, and we are happy to answer any questions or have further discussions.\"}",
"{\"title\": \"Response\", \"comment\": \"The authors' response addresses my concerns, and I will keep my score as 6.\"}",
"{\"title\": \"Thanks for the feedback.\", \"comment\": \"The feedback has addressed most of my concerns. I have updated the rating. Thanks.\"}",
"{\"title\": \"Thanks for your feedback\", \"comment\": \"Dear Reviewer pXZj,\\n\\nThank you for your feedback! We sincerely appreciate your constructive review and valuable suggestions, which have been incredibly helpful in improving our work.\\n\\nBest regards,\\n\\nAuthors of #5440\"}",
"{\"title\": \"Response to Reviewer hewp Part I (Part I of II)\", \"comment\": \"We thank the reviewer for the supportive comments. The detailed response to each point is as follows.\\n\\n> **W1. The citations of the whole paper are wrong.**\\n\\n- This was an oversight in our writing. In the revised version, we will correct all the citations to ensure they follow the appropriate format.\\n\\n> **W2. From Figures 1 and 3, we can see the advantages of MGMapNet over other models. However, I can still see that the extracted lanes by MGMapNet are sometimes zigzagged while the ground truth lines are straight lines. I wonder whether you can add some regularity or loss terms to avoid this.**\\n\\n- Thank you very much for your insightful suggestions. Geometric properties have always been a focus of research. PivotNet (CVPR 2023) uses the inherent properties of polylines to introduce the concepts of pivot points and collinear points.\\n\\n- Your suggestion to put more emphasis on the positions of these pivot points is indeed reasonable. However, in our current version, we have primarily developed more advanced representations within the MapTRv2 framework. \\n\\n- Employing instance segmentation as auxiliary supervision facilitates the optimization of specific zigzagged points.\\nPrecise rasterized instance masks for each instance query can partially reduce point instability. In addition, the use of auxiliary instance segmentation loss, also adopted in our method, further prevents the generation of anomalous points within map elements.\\nHowever, as shown in Figure 3, such corner cases cannot be completely avoided. \\n- In future work, we aim to further optimize our approach by exploiting geometric relationships.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": [\"We thank the reviewers for their insightful comments and for the positive feedback provided on the paper.\", \"We have uploaded a new version of the manuscript, incorporating reviewer suggestions and addressing the points raised, marking changes in blue text. In particular, we have included:\", \"Corrected the citation format throughout the paper. (Reviewer hewp)\", \"Added a detailed description of the loss function in the appendix. (Reviewer hewp)\", \"Added a detailed description of the technological challenges and our contributions. (Reviewer cU8c)\", \"Added a description of some evaluation metrics in the experiment. (Reviewer cU8c)\", \"Modified the chart formats for improved legibility. (Reviewer cU8c)\", \"Figure 2 has been reorganized for improved clarity and aesthetics. (Reviewer cU8c)\", \"Included more detailed efficiency comparisons in the appendix. (Reviewer pXZj)\", \"We believe these revisions have strengthened the paper and look forward to further feedback. Below, we offer specific responses to the individual comments from each reviewer.\"]}",
"{\"metareview\": \"The reviewers agree that the problem studied is important in practice, the paper is clearly written and easy to follow, and the experiments are extensively conducted to verify the effectiveness of the model. The reviewers also raised some issues about the unclear explanation of the motivations and contributions of the paper. The writing and the explanations of the figures should also be improved. Some typos should be carefully corrected in the final version.\", \"additional_comments_on_reviewer_discussion\": \"The authors have provided rebuttals. During the discussion, some reviewers think some of their concerns are addressed and thus they would like to raise their scores.\"}",
"{\"summary\": \"The paper presents MGMapNet, a framework designed for end-to-end vectorized High-Definition map construction. MGMapNet introduces a multi-granularity representation that integrates both instance-level and point-level queries to effectively capture both category and geometric information of road elements. The proposed Multi-Granularity Aggregator and Point Instance Interaction modules allow information exchange between the two granularities, resulting in enhanced prediction accuracy. Experimental results demonstrate state-of-the-art performance on benchmark datasets such as nuScenes and Argoverse2, surpassing various baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"S1. The paper introduces a method that combines both coarse-grained instance-level and fine-grained point-level queries, effectively capturing both global category information and local geometric details of map elements.\\n\\nS2. The design of the Multi-Granularity Aggregator and Point Instance Interaction modules facilitates efficient and effective information sharing between instance-level and point-level queries.\\n\\nS3. The proposed MGMapNet framework outperforms several baseline models, achieving the state-of-the-art performance in HD map construction.\", \"weaknesses\": \"W1. The paper\\u2019s description can be overwhelming for readers who are not deeply familiar with the HD map construction topic (e.g., me). For example, it lacks a formal problem formulation, which would help in grounding the research context. Additionally, the method's explanation is a bit difficult to follow.\\n\\nW2. The paper could be strengthened by providing a detailed analysis of the time and space complexity of MGMapNet compared to baseline models. Given that efficiency is a key motivation, understanding how MGMapNet performs in terms of computational and memory resources would be beneficial.\\n\\nW3. 
It is not clear why the training epochs are set to have multiple values for various models, and why the long training schedule leads to a fair comparison.\", \"questions\": \"Please clarify the comments for W1-W3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your response. I have no more questions and will maintain my score.\"}",
"{\"summary\": \"This paper presents MGMapNet, a multi-granularity map network for end-to-end vectorized HD map construction based on multi-scale bird\\u2019s eye view (BEV) images. Evaluations on four datasets show the effectiveness of MGMapNet over multiple baseline models.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The contributions are highlighted. The novel contributions compared with previous approaches are also discussed properly.\\n2. Both quantitative and qualitative results are shown and discussed. Ablation studies are conducted in a meaningful way.\", \"weaknesses\": \"1. The citations of the whole paper are wrong. It should be \\\\citep{} instead of \\\\cite{}.\\n\\n2. From Figures 1 and 3, we can see the advantages of MGMapNet over other models. However, I can still see that the extracted lanes by MGMapNet are sometimes zigzagged while the ground truth lines are straight lines. I wonder whether you can add some regularity or loss terms to avoid this. Maybe for those straight lines, resample their vertices along the straight lines every time during model training so that the model learns the linear feature instead of individual point locations?\\n\\n3. Can you describe each loss term in detail? Do you use the distance from each point to the ground truth points as the loss for L_{pts}?\", \"questions\": \"See the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Kind Reminder to Reviewer hewp for the Feedback on Our Rebuttal\", \"comment\": \"Dear Reviewer hewp,\\n\\nThank you for your thoughtful review and valuable feedback. We sincerely appreciate the time and effort you have dedicated to evaluating our work.\\nYour comments are highly insightful, and we believe they have provided us with an excellent opportunity to further refine and improve our paper. In our rebuttal, we have carefully addressed all of your concerns and provided detailed responses.\\nSpecifically, we have:\\n\\n- In the latest revised version we uploaded, we corrected the citation format throughout the paper and added a detailed description of the loss function in the appendix.\\n- Regarding weakness 2 concerning the zigzagged corner case, we provided an explanation and offered some attempts to address it.\\n\\nWe would be truly grateful to receive your feedback on the points we have addressed. If you have any further questions or require additional clarifications, please do not hesitate to let us know.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nAuthors of #5440\"}"
]
} |
E7gjRqFT9O | FlashEVA: Accelerating LLM Inference via Efficient Attention | [
"Juan Gabriel Kostelec",
"Qinghai Guo"
] | Transformer models have revolutionized natural language processing, achieving state-of-the-art performance and demonstrating remarkable scalability. However, their memory demands, particularly due to maintaining full context in memory, pose significant challenges for inference. In this paper, we present FlashEVA, an efficient implementation of EVA (Efficient Attention via Control Variates), and demonstrate how to finetune transformers to adapt to FlashEVA attention. Our method enables fine-tuning of Transformer models with as few as $1.6B$ tokens while preserving effectiveness across various downstream tasks. Notably, FlashEVA achieves up to $6.7x$ higher throughput during inference compared to standard Transformer implementations. Despite these improvements, we observe limitations in retrieval-focused tasks. Our implementation offers control over the trade-off between throughput and accuracy through adjustable hyperparameters, providing greater flexibility. This work represents a significant step towards more efficient and adaptable Transformer-based models for inference. | [
"efficient attention",
"transformers",
"large language models",
"inference"
] | Reject | https://openreview.net/pdf?id=E7gjRqFT9O | https://openreview.net/forum?id=E7gjRqFT9O | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"qAdBVwgAkf",
"XqS0eRdqR1",
"VIJDqFKnJ7",
"OcOaWdl0dC",
"C8Vsvshdme",
"A25FrC6Iz8",
"6pqki5q9EL"
],
"note_type": [
"official_review",
"official_comment",
"decision",
"official_review",
"official_review",
"meta_review",
"official_review"
],
"note_created": [
1730609064740,
1732647586718,
1737524196118,
1730705569533,
1730184521329,
1733976024683,
1730603323024
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12506/Reviewer_H6Vo"
],
[
"ICLR.cc/2025/Conference/Submission12506/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12506/Reviewer_Y2bY"
],
[
"ICLR.cc/2025/Conference/Submission12506/Reviewer_x8Hx"
],
[
"ICLR.cc/2025/Conference/Submission12506/Area_Chair_JVrT"
],
[
"ICLR.cc/2025/Conference/Submission12506/Reviewer_vRj4"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces FlashEVA, an efficient implementation of EVA to improve Transformer model performance during inference by reducing memory usage. FlashEVA allows fine-tuning of Transformers with few tokens. While it excels in many applications, it has limitations in retrieval-focused tasks. The method also offers adjustable settings for balancing throughput and accuracy, enhancing flexibility for different use cases.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper is well-organized and clear, effectively presenting complex ideas in efficient attention mechanisms. Background and motivation are well-integrated, and the experimental results are systematically laid out, with tables and figures that clarify performance gains. The discussion of trade-offs and limitations shows a balanced approach, enhancing the paper\\u2019s readability and impact.\", \"weaknesses\": \"A primary limitation of this paper is its lack of significant novelty beyond the existing EVA framework. While FlashEVA offers efficiency gains, these improvements are largely a result of optimizing existing CUDA/Triton kernels rather than introducing new concepts. As such, the contribution may appear incremental, particularly given the relatively modest improvements in throughput and memory efficiency.\\n\\nWhile the paper briefly compares FlashEVA with the Mamba model, it does not thoroughly examine their differences or provide a clear rationale for preferring FlashEVA. Mamba significantly outperforms FlashEVA across most tasks, even with fewer computational resources. To provide a more balanced view, the authors could explore the advantages FlashEVA may offer over Mamba, or discuss potential benefits of integrating aspects of both methods. 
Such a discussion could help clarify FlashEVA\\u2019s unique contribution and when it might be favored over Mamba, thereby enhancing the overall value of the work.\\n\\nFlashEVA demonstrates impressive gains with long-sequence generation but shows limited improvement for shorter sequences. This restriction reduces the model's scalability, especially in applications requiring shorter or mixed-length sequences. The authors could mitigate this issue by optimizing the random feature computation or implementing adaptive techniques that reduce computational overhead for shorter sequences.\", \"questions\": \"Please address the questions raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Addressing novelty criticism\", \"comment\": \"We would like to thank all the reviewers for their constructive feedback. We are happy to hear that you have found the paper easy to read, and the experimental results comprehensive. On the other hand, most have raised objections regarding the novelty of the method, which is what we would like to address.\\n\\nWe agree with the reviewers that the proposed method is simple and can appear incremental over the existing EVA approach. Nevertheless, we would argue that this does not mean that it is not useful and of interest to the community.\\n\\nFirst, we propose to apply the EVA framework in a different setting than the original authors. While they have mainly focused on training from scratch, we focus on the adaptation of pretrained transformers into linearized variants in order to improve model inference efficiency. For this, we leverage the EVA framework due to its Softmax attention approximation guarantees.\\n\\nFurthermore, we would like to highlight two crucial problems that have limited the adoption of efficient attention approaches, especially for model inference.\\n\\n1. There is a need for efficient implementations in order to be able to materialize the theoretically promised speedups of these methods. By showing that EVA attention can be reformulated as a Softmax attention over a modified set of keys and values, we can leverage community improvements to obtain a very efficient implementation of the EVA attention. This allows FlashEVA to optimize inference in two ways: for the prompt processing, it leverages the lower computational complexity of the efficient attention, while for the autoregressive generation/decoding, it benefits from the lower memory footprint, which is the main bottleneck in the decoding process.\\n\\n2. Linearized attention approaches have generally faced a performance degradation compared to transformers using standard softmax attention. 
EVA attention has solid theoretical grounding, and with its hyperparameters we can tune how well we will approximate softmax attention. In fact, our method consistently outperforms DiJiang attention over all model scales, showing that the EVA framework is adequate for this task. Nevertheless, as mentioned, all linear attention approaches will suffer decreased performance on retrieval-focused tasks, as that is a fundamental limitation of the linearized approach [1].\\n\\nGiven the above points, we feel that the FlashEVA method is a contribution of interest to the wider community.\\n\\nFurthermore, we would like to expand and further contextualize the main experimental results. The main comparison baseline is the DiJiang attention, a recent approach showing comparable performance to Softmax attention transformers on inference, with substantial efficiency improvements. As can be seen, our method consistently outperforms the DiJiang attention for all model scales. In fact, at larger model scales, in some tasks we achieve up to 5% better accuracy. Indeed, one of the limitations of the DiJiang method we observed was its variability in performance, drastically underperforming the baseline Transformer in certain tasks.\\n\\nWe additionally compared FlashEVA to a Sliding Window attention baseline, with a larger sliding window (to match the additional Keys and Values in the FlashEVA approach). Note that in the main results of FlashEVA we did not use a sliding window approach, which would likely improve performance (the reason was that we observed some numerical instabilities when testing this on larger models, so for the sake of saving resources, we only used the local window approach in the main experiments). Only at the largest 18 model size do we observe some performance regression compared to that baseline; however, that is mainly driven by subpar performance on the WSC and Winogrande tasks, which generally contain quite short sequences. 
For those sequences, the Sliding Window attention basically acts like full attention over the whole sequence (also seen in the fact that it basically achieves the same performance as the full attention variant).\\n\\nFinally, in terms of downstream task performance, we also do not expect to achieve higher performance than the baseline transformer, as we are merely trying to efficiently approximate the full attention.\\n\\nWe have thus shown a method that better matches the baseline Transformer on downstream tasks, which has a principled way of trading off some performance for efficiency, and has practical speedups in model generation.\\n\\nHaving said all that, we understand that the novelty criticism is valid, and ICLR might thus not be the best venue for this submission. We will take into account all the useful feedback (e.g. improving the paper writing to describe FlashEVA better, introducing FlashAttention, a more concrete comparison to Mamba, etc.) in any future submission.\\n\\n[1] Jelassi, S., Brandfonbrener, D., Kakade, S. M. & Malach, E. Repeat After Me: Transformers are Better than State Space Models at Copying. (2024) ArXiv: http://arxiv.org/abs/2402.01032\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper proposes FlashEVA, an efficient implementation of EVA, aimed at addressing the high memory demands of Transformer models during inference. FlashEVA allows for fine-tuning of large language models (LLMs) with minimal performance degradation while significantly reducing memory usage and increasing throughput. The method achieves this by maintaining a compressed cache of past context, which is less memory-intensive than computing the prefix during inference. However, it shows limitations in retrieval-focused tasks. The implementation offers adjustable hyperparameters to balance throughput, accuracy, and memory usage, providing flexibility for various applications.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well organized and presented.\", \"Experimental results show that the adjustable hyperparameters allow for a trade-off between throughput, accuracy and memory usage.\", \"FlashEVA is compatible with existing optimized attention implementations and can leverage CUDA kernels for performance optimization.\"], \"weaknesses\": [\"The proposed method is overly simplistic and unimpressive. It looks like an implementation of FlashAttention with EVA, which has already been proposed.\", \"The experimental results are not persuasive since they don\\u2019t show the advantages compared to DiJiang and Sliding window as in Figure 1. Instead, it is only a trade-off between DiJiang and Sliding window.\"], \"questions\": [\"How about the performance when the model is larger, such as 7B, 13B or 70B?\", \"Can the method be compared to FlashAttention-3 or combined with it?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper presents FlashEVA, an efficient implementation of EVA, and demonstrates how to finetune transformers to adapt to FlashEVA attention. The experimental results show that it can accelerate LLMs by up to 6.7x with little performance drop.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper is well-written and easy to understand.\", \"The experimental results are rich.\"], \"weaknesses\": [\"I do not see any novelty in this paper. Eq.11 is derived from Eq.9 and Eq.10 in the background section, and it seems that the authors only define the augmented key and value set.\", \"Overall, this paper does not provide anything new, and it is more like an experimental report than an academic paper.\", \"I think this paper is not ready for submission to ICLR.\"], \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The paper \\\"FlashEVA: Accelerating LLM Inference via Efficient Attention\\\" introduces FlashEVA, an efficient implementation of Efficient Attention via Control Variates (EVA) to reduce the memory and computational demands of Transformer models during inference. FlashEVA achieves up to 6.7x throughput improvement and 5x memory reduction, enabling fine-tuning with minimal data while maintaining strong performance across general NLP tasks. It provides flexibility through hyperparameter tuning, balancing speed, accuracy, and memory usage, and demonstrates compatibility with existing optimized attention mechanisms like FlashAttention. FlashEVA underperforms on retrieval-focused tasks due to the lack of a sliding window mechanism and faces training instabilities in larger models. The paper is well organized and written. However, the technical novelty of this paper is still limited since it looks like a modification of an existing method. While the paper briefly compares FlashEVA with the Mamba model, it does not thoroughly examine their differences or provide a clear rationale for preferring FlashEVA. I think the paper could be further improved according to the suggestions for a future submission.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not reply to each reviewer independently, and therefore no discussions were conducted.\"}",
"{\"summary\": [\"This paper proposes an efficient attention implementation via the control variates method using custom CUDA and Triton kernels\", \"can be finetuned from transformer models with only 1.6B tokens\", \"shows higher inference throughput and lower memory.\", \"suffers in retrieval-focused tasks\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation and the background about different forms of attention are clear.\", \"Extensive experiment results including different downstream tasks.\", \"The proposed method achieves obviously better throughput and memory consumption compared with EVA and FlashAttention.\", \"The proposed method can be finetuned from a standard attention model, which makes it easier to use.\"], \"weaknesses\": [\"The contribution of this work is incremental, based on EVA. It is not a new algorithm but an efficient implementation of EVA.\", \"Although the background of RFA and EVA is clearly explained, some background about FlashAttention could be included, since it is closely related, in case the reader is not familiar with it.\", \"In addition, more details should be given about the CUDA implementation, such as pseudocode and how the custom attention mask is achieved. The current presentation of FlashEVA is too simple.\"], \"questions\": \"This method is claimed to suffer in retrieval-focused tasks. I wonder what the reason is; is this due to the nature of linear attention?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
E7ecidOeCE | Selecting Influential Samples for Long Context Alignment via Homologous Models’ Guidance and Contextual Awareness Measurement | [
"Shuzheng Si",
"Haozhe Zhao",
"Gang Chen",
"Yunshui Li",
"Kangyang Luo",
"Chuancheng Lv",
"Kaikai An",
"Fanchao Qi",
"Baobao Chang",
"Maosong Sun"
] | The expansion of large language models to effectively handle instructions with extremely long contexts has yet to be fully investigated. The primary obstacle lies in constructing a high-quality long instruction-following dataset devised for long context alignment. Existing studies have attempted to scale up the available data volume by synthesizing long instruction-following samples. However, indiscriminately increasing the quantity of data without a well-defined strategy for ensuring data quality may introduce low-quality samples and restrict the final performance. To bridge this gap, we aim to address the unique challenge of long-context alignment, i.e., modeling the long-range dependencies for handling instructions and lengthy input contexts. We propose GATEAU, a novel framework designed to identify the influential and high-quality samples enriched with long-range dependency relations by utilizing crafted Homologous Models' Guidance (HMG) and Contextual Awareness Measurement (CAM). Specifically, HMG attempts to measure the difficulty of generating corresponding responses due to the long-range dependencies, using the perplexity scores of the response from two homologous models with different context windows. Also, the role of CAM is to measure the difficulty of understanding the long input contexts due to long-range dependencies by evaluating whether the model’s attention is focused on important segments. Built upon both proposed methods, we select the most challenging samples as the influential data to effectively frame the long-range dependencies, thereby achieving better performance of LLMs. Comprehensive experiments indicate that GATEAU effectively identifies samples enriched with long-range dependency relations and the model trained on these selected samples exhibits better instruction-following and long-context understanding capabilities. | [
"Long context alignment",
"Large language models",
"Data selection",
"Efficient instruction tuning"
] | https://openreview.net/pdf?id=E7ecidOeCE | https://openreview.net/forum?id=E7ecidOeCE | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wr1t8UKxf0",
"uz8jMtHfw1",
"sm1iopVwEt",
"pJLL7j7guB",
"oXkhuIdUil",
"nXlbHKldf0",
"mcFxwKyTN7",
"kicA2pIXfn",
"kWtvkiHN4M",
"jFC0rXXNvR",
"h2SBLmPdxT",
"cKX8qDxmJ3",
"b9lZvROY7p",
"ZH5tKVdkGN",
"X2qjUue4jP",
"WJ5UkfxG7K",
"VHPDFNgzr0",
"SFRP6F9luW",
"OTo1azeriV",
"NWa7YO0MyR",
"NWLDG2aH91",
"KUkOIT6x3Q",
"Ggy3306OCy",
"6dU6oy4gpq",
"4sSIQV1x87",
"47jZ3qs4lH",
"2Kb0YmM9mx",
"29k92akaLL"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1733148187858,
1733881390767,
1733309346256,
1732282476954,
1732282840146,
1733154145424,
1733309308214,
1732616330488,
1732636794074,
1733309356277,
1732281859626,
1732616269122,
1730582268516,
1732282232537,
1732283277061,
1733310023398,
1733309338427,
1732282754961,
1732282303507,
1732282657517,
1730521442075,
1732549993415,
1732281916905,
1731287882406,
1732641211099,
1732344067994,
1732616090204,
1730984533875
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission935/Reviewer_pvW2"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Reviewer_93rU"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Reviewer_KGEd"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Reviewer_93rU"
],
[
"ICLR.cc/2025/Conference/Submission935/Area_Chair_RUAE"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Reviewer_rtwW"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Authors"
],
[
"ICLR.cc/2025/Conference/Submission935/Reviewer_pvW2"
]
],
"structured_content_str": [
        "{\"comment\": \"I thank the authors for providing the response. However, the main concern I have is on the rationale of choosing the base model as a reference model. I agree that on instruction-following data this can be less of a problem, but in long-context data this can be more of an issue as the base model was not trained on long-context data and the perplexity value is less meaningful. I will keep the current score.\"}",
        "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Dear Reviewers:\\n\\nWe would like to withdraw our submission titled \\\"Selecting Influential Samples for Long Context Alignment via Homologous Models\\u2019 Guidance and Contextual Awareness Measurement\\\" (Paper ID: 935) from the ICLR review process. After careful consideration, we have decided to improve the content of our work according to the comments before resubmitting it to a future venue.\\n\\nWe feel somewhat frustrated that we did not have a full discussion with the reviewers at the rebuttal stage :( . Still, we deeply appreciate the reviewers' time and constructive feedback, which have provided valuable insights for refining our research. Thank you for understanding.\\n\\nYours,\\nAuthors\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nSince it is the last day of the discussion, we hope that you can take a look at our response. Thanks.\\n\\nBest regards,\\n\\nThe Authors\"}",
        "{\"title\": \"Response to Reviewer pvW2 for Weaknesses 2 and Question 1,2,3\", \"comment\": \"### **W2:**\\n> **W2:** LESS is an optimizer-based method that select a subset of instruction-following dataset by estimating data influence (selecting data points that minimizes the validation loss) and this can be as one of the baselines.\\n\\n### **Response:** \\n\\nLESS is a wonderful and interesting work in SFT data selection. The reason why we do not choose it as the baseline is that LESS needs to extract a subset of the evaluation benchmark to construct a validation set and estimate data influence on this validation set. However, the evaluation benchmarks MT-Bench and LongBench-Chat only contain 80 and 50 test data points respectively. Therefore, extracting a validation set from MT-Bench and LongBench-Chat and using the remaining data as a new test set would severely compromise the final results. This prevents us from including LESS in a fair comparison with our method and other baselines. We have cited this wonderful work LESS in our related work section.\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **Q1:**\\n> **Q1:** For perplexity guidance (line 377), which $\\\\theta$ do you use ($\\\\theta_{A}$ or $\\\\theta_{B}$ -- I am assuming $\\\\theta_{B}$ here)?\\n> \\n\\n### **Response:** \\nYes, we use the long-context model $\\\\theta_{B}$ for the baseline Perplexity Guidance. We have added a more detailed description in line 377 of our paper (**highlighted in red** in the uploaded revision pdf). Thanks for your advice.\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **Q2:** \\n> **Q2:** What is the context length of MT-Bench? The paper mentions that MT-Bench is for short-context instruction following. Since the proposed method, GATEAU, is designed for long-context, do you have any hypothesis on why it also improves on the short-context instruction following tasks (Table 4)?\\n> \\n\\n### **Response:** \\nIn MT-Bench, the length of the instruction does **not exceed 300 words**. 
We discuss this interesting phenomenon in **line 448** of our paper: we conjecture that handling complex tasks (i.e., long input contexts) contributes to handling the easy ones (i.e., short input contexts).\\n\\n\\\\\\\\\\\\\\n\\n### **Q3:** \\n> **Q3:** For Table 5, current bold numbers are a bit misleading. Usually bold numbers indicate the highest numbers across some category but here it seems the proposed method is bold. Also, for 13B models, it seem the w/o HMG and w/o CAM settings are not reported. Is there a particular reason of not doing so (e.g., computational constraint)? The ablation study on 7B model does show the effectiveness.\\n> \\n\\n### **Response:** \\nSorry for the misunderstanding due to our typos in the submission version. We actually want to use bold numbers to indicate the **highest numbers** across methods that use the same ratio of long SFT data. **Now we have modified our paper to correctly show the experimental results**. Meanwhile, as shown in line 477 in our paper, we want to explore whether our method GATEAU can fit in larger LLMs in Table 5. The 13B model (GATEAU-LLaMA-13B) shows consistent improvements on three benchmarks. This indicates that GATEAU scales effectively to larger-scale models. For your concerns, we further conducted the **additional ablation study** for 13B models in our revised paper (**highlighted in red** in the uploaded revision pdf). This indicates the effectiveness of GATEAU, and using both methods can further improve the overall performance as they separately measure the difficulty of generating corresponding responses and understanding long input contexts due to the long-range dependencies.\\n\\nWe sincerely appreciate your careful review and valuable feedback, which have significantly contributed to the improvement of our paper.\"}",
        "{\"title\": \"Response to Reviewer 93rU (1/N)\", \"comment\": \"Thanks for your valuable review and suggestions! It is encouraging to see you find our methodology well-motivated and effective.\\n\\nWe sincerely thank you for your time and constructive comments. Below, we provide detailed replies to your comments to resolve your concerns.\\n\\n### **W1 and Q1:** \\n\\n> **W1:** The proposed method may assign similar high scores to duplicate or highly similar samples, assuming they contribute independently to model improvement. However, repeated exposure to similar samples may not add incremental value and could even undermine the alignment process of LLMs.\\n> \\n> \\n> **Q1:** How does the proposed framework handle potentially redundant or highly similar samples? Could repeated high-scoring samples lead to an overrepresentation of certain types of data, thus limiting the diversity and richness of long-context dependencies in the selected dataset?\\n> \\n\\n### **Response:** \\nThis is an interesting question. In our long SFT data selection process, we have partially considered such sample redundancy. In particular, HMG and CAM separately measure the difficulty of generating corresponding responses and understanding long input contexts due to the long-range dependencies, thus the final score derived from two different perspectives inherently reduces the influence of redundant samples. **As shown in Table 7 in the Appendix, our method achieves better overall performance and more balanced performance in 8 different tasks, showing the effectiveness and diversity of selected samples by GATEAU.**\\n\\nMeanwhile, to further explore whether our method inherently reduces the influence of redundant samples, we follow the clustering approach of [1] to cluster all candidate instruction pairs into k clusters. Specifically, we employ the k-Means algorithm and a sentence-transformers model that maps sentences to a 384-dimensional dense vector space. 
Subsequently, semantic features are PCA-reduced to retain 95% of dimensions. By setting the number of clusters as $k = \\\\sqrt{n/2}$ for $n$ long SFT samples, all 10k long SFT samples are clustered into 70 clusters. Then, all samples are sorted based on their scores according to Eq. (6), and the top $n_1$ samples are selected. Within each cluster, samples are sorted by score, and the top $n_2$ pairs are chosen. We set $n_2$ to 1, which is the same as [1]. In total, we obtain $n_1 + k * n_2$ (i.e., $4300 + 70 * 1$) samples and use these selected data to train the model, namely -w Diversity-preserved Selection. We report the results of GATEAU-LLaMA - 50\\\\% on LongBench-Chat and MT-Bench in two settings.\\n\\n| Method | **LongBench-Chat** | **MT-Bench** |\\n| --- | --- | --- |\\n| GATEAU-LLaMA - 50% in Real-world Settings | **56.8** | 57.3 |\\n| -w Diversity-preserved Selection | 56.2 | **57.8** |\\n| GATEAU-LLaMA - 50% in Limited Short Instruction Data Settings | 59.0 | **54.2** |\\n| -w Diversity-preserved Selection | **59.2** | 53.4 |\\n\\n**In this table, we find that using the Diversity-preserved Selection does not consistently improve the final performance, showing our proposed GATEAU has partially addressed the sample redundancy and implicitly ensured the diversity of selected long SFT data.**\\n\\n[1] Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation. EMNLP 2024\"}",
        "{\"comment\": \"Dear Reviewer pvW2,\\n\\nThank you for your valuable feedback. We would like to provide some additional explanations and details to clarify any misunderstandings between us.\\n\\n**<1> Is the perplexity score from the short-context model $\\\\theta_{A}$ really so high that it cannot accurately measure the difficulty?** We further calculate the average perplexity value generated by the short-context LLM $\\\\theta_{A}$ for the entire long SFT dataset during the whole HMG process, which is **3.72**. This is because we expand the base frequency of the RoPE position encoding by 200 times (from 10,000 to 2,000,000) to extend the context windows and avoid the model producing extreme perplexity scores (e.g., >1,000) in Homologous Models\\u2019 Guidance, as detailed in lines 360-366 of our paper.\\n\\n\\n\\n**<2> What Will Happen if We Do Not Extend the Context Windows of LLaMA-2-base-4k?** We further discuss this question in lines 1133-1153 of our paper. As shown in Table 8, we are surprised to find that -w/o Extended Context Windows also achieves competitive results in three benchmarks compared to GATEAU-LLaMA. Even if the perplexity score from the short-context model is very large (e.g., larger than 1,000), the value after softmax normalization is still useful and applicable in the Homologous Models\\u2019 Guidance. This interesting finding can be used to reduce the complexity of applying Homologous Models\\u2019 Guidance and achieve competitive performance.\\n\\nWe hope our response has addressed your concerns, and we look forward to your further feedback.\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nSince it is the last day of the discussion, we hope that you can take a look at our response. Thanks.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"comment\": \"Dear Reviewer 93rU,\\n\\nWe would like to thank you again for your detailed reviews. We have updated our draft and added replies to your concerns with our latest experimental results.\\n\\nSince the rebuttal deadline is approaching soon. Given that your current score is 5, we would appreciate it if you could let us know if our responses have addressed your concerns satisfactorily. If your concerns have not been resolved, could you please let us know about it so that we have the opportunity to respond before the deadline?\\n\\nWe would be happy to have any follow-up discussions or address any additional concerns.\\n\\nThanks very much! Looking forward to your reply.\\n\\nBest,\\n\\nAuthors\"}",
        "{\"title\": \"Acknowledgement\", \"comment\": \"This message acknowledges the authors' response. Regarding the first question, it seems the repetition has a mixed impact on the SFT performance, which may have significant impacts. Besides, this question also raises concerns about performance degradation on other general LLM capabilities. It is suggested to have further investigation into this interesting pattern. Regarding the second question, it seems the authors didn't directly answer the question on large-scale data. Based on the current response, I choose to maintain my current score.\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nSince it is the last day of the discussion, we hope that you can take a look at our response. Thanks.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer rtwW (1/N)\", \"comment\": \"Thanks for your valuable review and suggestions! It is encouraging to see you find our paper truly makes sense, and that the proposed method is simple, motivated, and effective.\\n\\nWe sincerely thank you for your time and constructive comments. Below, we provide detailed replies to your comments to resolve your concerns.\\n\\n### **W1, W2 and W3:**\\n> \\n> \\n> **W1:** In Eq.(2), the explanations for the new notation \\\\theta _{A} and $\\\\theta _{B}$ should follow immediately after their appearance in Eq.(2).\\n> \\n> **W2:** In Tables 1-4, the captions should be put at the top (instead of the bottom) of the table.\\n> \\n> **W3:** In section 4.2 Impact of GATEAU, the paper first analyzes the experimental results in Tables 2 and 4 (line 413), and then analyzed the results in Tables 1 and 3 (line 429). It will be better to analyze the experimental results according to the order of Tables.\\n> \\n\\n### **Response:** \\nThanks again for your detailed suggestions in our presentation. We have modified our paper in the uploaded revision pdf to make the organization clearer (**highlighted in red** in the uploaded revision pdf).\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **W4 and Q1:**\\n> \\n> \\n> **W4:** The memory storage and running time of the proposed algorithm is missing, which in my opinion can help readers understand the proposed algorithms more comprehensively. For example, in Eq.(2), HMP model uses short context model $\\\\theta _{A}$ and long context model $\\\\theta _{B}$ to compute the perplexity distance, so does the proposed algorithm have more model parameters (e.g. of $\\\\theta _{A}$ and $\\\\theta _{B}$ ) than the existing algorithm (e.g. only has $\\\\theta _{B}$)? 
Therefore, I would suggest to add a table to compare the memory burden as well as the execution time with the existing methods on long context alignment.\\n> \\n> **Q1:** Could you give some comparisons of memory burden between the proposed algorithms and other exiting methods?\\n\\n### **Response:**\\n\\nThanks for your advice; we believe adding such details will help our paper be more comprehensive.\\n\\n<1> **Our Experimental Device**: Firstly, we want to explain the experimental device. As shown in Appendix A, all experiments are conducted on 8xA800 80G GPUs (the experiments are not limited to this type of GPU).\\n\\n<2> **GPU Execution Time:** Based on the principle of making full use of GPU devices (e.g., using a multi-processing strategy and choosing large batch sizes, etc.), we list the execution time in the following table:\\n\\n| **Stage** | **Execution Time** |\\n| --- | --- |\\n| Training on the full dataset in the real-world setting | ~176 GPU hours |\\n| Selecting long SFT data via HMG | ~64 GPU hours |\\n| Selecting long SFT data via CAM | ~48 GPU hours |\\n| Selecting long SFT data via Cherry Selection | ~80 GPU hours |\\n| Selecting long SFT data via Perplexity Guidance | ~32 GPU hours |\\n\\nAs shown in this table, we find that our method (HMG + CAM) introduces acceptable offline time overhead compared to the supervised fine-tuning stage and improves the overall performance of long-context LLMs. Perplexity Guidance applies a single LLM to compute the score, thus it achieves less execution time but worse performance in our experiments. Meanwhile, another strong baseline Cherry Selection introduces an additional training stage and computes the proposed Instruction-Following Difficulty (IFD) by applying the forward propagation twice on a single long SFT sample, thus necessitating more execution time compared to our proposed HMG. 
Meanwhile, our CAM and HMG can process the data in parallel to further decrease the execution time, e.g., only 8 hours with 16xA800 80G GPUs. **Overall, compared to other baselines, the experimental results of our proposed GATEAU (consisting of HMG and CAM) demonstrate that the additional execution time is worthwhile.**\\n\\n<3> **GPU Memory Burden:** As our method is designed to score the long SFT data and then select the influential samples used for the SFT stage, it does not introduce additional memory burden during the supervised fine-tuning and inference stage of the long-context model $\\\\theta_{B}$. For your concerns about HMG, we compute perplexity scores generated from two models $\\\\theta_{A}$ and $\\\\theta_{B}$ for a given SFT data in parallel, and use the computed perplexity scores (cached in JSON files) to get the homologous models\\u2019 perplexity score HMP as shown in Eq. (2). **Thus HMG does not introduce additional GPU memory burden, only introducing acceptable additional execution time as shown in the Execution Time table**. The GPU memory requirements of CAM stem from the calculation of the attention scores for lengthy inputs, as well as the perplexity score computation. **This process is equivalent to performing two forward passes over the dataset without updating gradients, thus it does not add an extra GPU memory burden.**\"}",
"{\"comment\": \"Dear Reviewer KGEd,\\n\\nWe would like to thank you again for your detailed reviews. We have updated our draft and added replies to your concerns with our latest experimental results.\\n\\nSince the rebuttal deadline is approaching soon. Given that your current score is 5, we would appreciate it if you could let us know if our responses have addressed your concerns satisfactorily. If your concerns have not been resolved, could you please let us know about it so that we have the opportunity to respond before the deadline?\\n\\nWe would be happy to have any follow-up discussions or address any additional concerns.\\n\\nThanks very much! Looking forward to your reply.\\n\\nBest,\\n\\nAuthors\"}",
"{\"summary\": \"The paper introduces GATEAU, a framework aimed at improving the alignment of large language models with long-context data by identifying influential samples. GATEAU leverages two main components: Homologous Models\\u2019 Guidance, which assesses the difficulty of generating responses based on perplexity differences between models with varying context windows, and Contextual Awareness Measurement, which evaluates whether models focus on crucial segments in long input contexts. Through these mechanisms, GATEAU selectively curates samples to enhance LLMs' ability to follow instructions with long-range dependencies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses an important challenge of optimizing data selection for long-context alignment, enhancing LLMs' performance in real-world applications that require handling long, complex contexts.\", \"The paper provides comprehensive evaluations across multiple tasks and varied compression ratios, which helps illustrate the model\\u2019s versatility and effectiveness.\", \"The methodology for Homologous Models\\u2019 Guidance is well-motivated. Leveraging LLM collaboration for data selection is interesting.\", \"Ablation study and human evaluation are conducted to verify the effectiveness of the method.\"], \"weaknesses\": [\"The study only tests on the LLaMA2 family, which may restrict the generalizability of findings. Additionally, HMG relies on perplexity differences between homologous models with different context windows, leaving unclear guidance on applying the technique to other models lacking such variants.\", \"There is no analysis of the time efficiency of the data selection method. 
Since CAM requires estimating importance scores for each segment, which may be computationally expensive, time efficiency is a critical factor.\", \"The CAM module\\u2019s process for calculating attention weights is vague, particularly in obtaining the attention weight across tokens in response $y$ to token $t_j$. Including pseudo-code or further clarification would make the implementation more accessible.\", \"The method of dividing long inputs into equal segments may overlook the fact that important information could span multiple segments. Additionally, segment importance can depend conditionally on other segments, an aspect not accounted for in the current approach.\", \"The method averages attention across heads and layers without distinguishing them, potentially introducing noise, as different heads or layers may capture varied aspects of the input [1]. Calibrating the attention aggregation could improve focus on relevant input sections.\", \"Key hyperparameters, such as the number of segments, remain unexplored, missing potential for optimization in specific contexts.\", \"[1] Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. ICLR 2024\"], \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer pvW2 for Weaknesses 1 (1/ 2)\", \"comment\": \"Thanks for your valuable review and constructive comments. Below, we provide detailed replies to your comments to resolve your concerns.\\n\\n### **W1:**\\n\\n> **W1:** The method proposed heavily depends on the perplexity that measures the similarity between the base model's answer and the desired answer. The model $\\\\theta_{A}$ (e.g., LLaMA-2-7B-base-64k as mentioned in the paper) in the method that calculates the perplexity is a base model (model that only does completion). It does not make sense to measure the perplexity of a base model on an instruction-following dataset because the base model wasn't trained to follow instruction. In this case, the perplexity can be high and it does not accurately measures the difficulty of a document. The author also acknowledges this and in the first metric (HMG) uses a homologous short context model $\\\\theta_{B}$ as a reference model and calculates the difference between PPL of $\\\\theta_{A}$ and $\\\\theta_{B}$. The author claims that this \\\"mitigate the influence brought by lacking other capabilities\\\" (e.g., instruction-following capability, long-context capability). However, there is no guarantee that lacking other capabilities contribute equally in increasing PPL of two models. Essentially the data where the models measure the PPL on is out-of-distribution data (both models were not trained on the instruction-following data and $\\\\theta_{B}$, which is extended to a longer-context window using zero-shot method, was not trained on long-context data) and model's behavior is unpredictable. Plus, $\\\\theta_{A}$ was obtained by continual pretraining on long-context data. While HMG assumes $\\\\theta_{A}$ and $\\\\theta_{B}$ are similar models, it also depend on what the continual pretraining dataset is (here the method implicitly assumes that the additional dataset is small). 
If the additional dataset is very large, $\\\\theta_{A}$ and $\\\\theta_{B}$ can perform very differently.\\n> \\n\\n### **Response:** \\nThanks for your valuable questions. We want to clarify the following to make our paper more sound.\\n\\nFor your concerns about HMG and CAM:\\n\\n><1> **Is the perplexity score from the base model really so high that it cannot accurately measure the difficulty?**\\n\\nIntuitively, since the base model performs well on conditional generation tasks (e.g., continuation), it should also be able to **generate accurate perplexity scores** on the response of instruction-following data, even though the model might not be able to produce high-quality responses correctly, because these two capabilities are not the same. Therefore, we explore whether our long-text LLM $\\\\theta_{B}$ would produce incorrect perplexity values. We calculate the average perplexity value generated by the long-text LLM $\\\\theta_{B}$ for the entire long SFT dataset during the whole HMG process, which is **2.61**. Therefore, there is no issue of the perplexity from the base model being too high to accurately measure the difficulty.\\n\\n><2> **Can the perplexity score generated from the base model be used as guidance to select influential samples?**\\n\\nThe perplexity (PPL) of the responses computed with the base model is an intuitive metric, as it measures the difficulty of the data sample during the generation. In our experiments, we find simply using high perplexity (namely Perplexity Guidance in our paper) can also improve the performance compared with using the whole long SFT dataset, indicating the effectiveness of the perplexity score from the base model in selecting long SFT samples. Previous work [1] also finds that the Instruction-Following Difficulty (a variant of the perplexity score) computed with the base model works in selecting SFT samples. 
**According to these experiments, we believe that the perplexity generated from a base model can be used as positive guidance to select SFT samples.** Therefore, the use of the perplexity score generated from the base model in our method makes sense when selecting long SFT data. Meanwhile, our method HMG is designed to minimize the influence of other factors (e.g., the limited instruction-following ability of a base model) and capture the difficulty of modeling long-range dependencies, constructing more effective guidance for long SFT data selection and further improving overall performance. For CAM, utilizing perplexity scores to compute importance scores is also reasonable, and the experiments show improvement even when only using CAM.\\n\\n[1] From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning. NAACL 2024\"}",
        "{\"title\": \"Response to Reviewer 93rU (2/N)\", \"comment\": \"### **W2 and Q2:**\\n> **W2:** The paper does not discuss the computational resources required to implement the GATEAU framework. A selection method that is slow or computationally prohibitive is less feasible in practical scenarios, especially for extremely long instruction-following examples.\\n>\\n> **Q2:** Would this approach remain feasible for datasets of larger scale, and what are the expected costs in terms of time and resources?\\n### **Response:** \\n\\nThanks for your advice; we believe adding such details will help our paper be more comprehensive.\\n\\n<1> **Experimental Device**: As shown in Appendix A, experiments are conducted on 8xA800 80G GPUs (experiments are not limited to this type of GPU).\\n\\n<2> **GPU Execution Time:** Based on the principle of making full use of GPUs (e.g., multi-processing and selecting large batch sizes, etc.), we list the execution time in the following table:\\n\\n| **Stage** | **Execution Time** |\\n| --- | --- |\\n| Training on the full dataset in the real-world setting | ~176 GPU hours |\\n| Selecting long SFT data via HMG | ~64 GPU hours |\\n| Selecting long SFT data via CAM | ~48 GPU hours |\\n| Selecting long SFT data via Cherry Selection | ~80 GPU hours |\\n| Selecting long SFT data via Perplexity Guidance | ~32 GPU hours |\\n\\nAs shown in this table, we find that our method (HMG + CAM) introduces acceptable offline time overhead compared to the supervised fine-tuning stage and improves the overall performance of long-context LLMs. Perplexity Guidance applies a single LLM to compute the score, thus it achieves less execution time but worse performance in our experiments. 
Meanwhile, another strong baseline Cherry Selection introduces an additional training stage and computes the proposed Instruction-Following Difficulty (IFD) by applying the forward propagation twice on a single long SFT sample, thus necessitating more execution time compared to our proposed HMG. Moreover, our CAM and HMG can process the data in parallel to further decrease the execution time, e.g., only 8 hours with 16xA800 80G GPUs. **Overall, compared to other baselines, the experimental results of our proposed GATEAU (consisting of HMG and CAM) demonstrate that the additional execution time is worthwhile.**\\n\\n<3> **GPU Memory Burden:** As our method is designed to score the long SFT data and then select the influential samples used for the SFT stage, it does not introduce additional memory burden during the supervised fine-tuning and inference stage of the long-context model $\\\\theta_{B}$. For your concerns about HMG, we compute perplexity scores generated from two models $\\\\theta_{A}$ and $\\\\theta_{B}$ for a given SFT data in parallel, and use the computed perplexity scores (cached in JSON files) to get the homologous models\\u2019 perplexity score HMP as shown in Eq. (2). **Thus HMG does not introduce additional GPU memory burden, only introducing acceptable additional execution time as shown in the Execution Time table**. The GPU memory requirements of CAM stem from the calculation of the attention scores for lengthy inputs, as well as the perplexity score computation. **This process is equivalent to performing two forward passes over the dataset without updating gradients, thus it does not add an extra GPU memory burden.**\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **W3:** \\n> **W3:** Although the paper includes ablation studies, it lacks an experiment testing the effectiveness of using a single model's perplexity (e.g., LLaMA-2-7B-base-64k) as the sole criterion for data selection. 
Adding this experiment would provide valuable insight into the authors' claim on page 4, line 174, that high perplexity alone does not adequately reflect response difficulty in long-context scenarios.\\n### **Response:** \\nThank you for your suggestion! **Actually, we have already compared this important baseline, namely Perplexity Guidance in our paper, in our comprehensive experiments (e.g., Table 1, Table 2, Table 3, Table 4, and Table 8 in the Appendix).** We find that while a single model's perplexity can serve as weak but positive guidance to select SFT samples compared to using the full long SFT dataset, our proposed GATEAU achieves consistently better performance, showing that high perplexity alone does not adequately reflect response difficulty in long-context scenarios.\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **Q4:** \\n> **Q4:** Is there some display problem in Figure 2?\\n### **Response:** \\nFigure 2 illustrates the \\\"Needle in the Haystack\\\" test, which evaluates the model\\u2019s ability to utilize information from 10 different positions within long contexts of varying lengths (1k\\u201360k). The results are displayed using different colors. The green color across Figure 2 indicates that the model with GATEAU successfully passes all settings, resulting in a **uniform green display**. This comparison highlights the effectiveness of our method.\\n\\n\\\\\\\\\\\\\\\\\\n\\nWe hope this detailed explanation successfully addresses your concern. Your advice has significantly contributed to the quality of our paper.\"}",
"{\"title\": \"Request for Reviewers to Participate in the Discussion\", \"comment\": \"Dear Reviewers,\\n\\nAs today is the last day of the discussion, we would like you to take a look at our responses and kindly reconsider the ratings. Thanks.\\n\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"comment\": \"Dear Reviewer,\\n\\nSince it is the last day of the discussion, we hope that you can take a look at our response. Thanks.\\n\\nBest regards,\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer KGEd (2/N)\", \"comment\": \"### **W4 and W6:**\\n\\n> **W4:** The method of dividing long inputs into equal segments may overlook the fact that important information could span multiple segments. Additionally, segment importance can depend conditionally on other segments, an aspect not accounted for in the current approach.\\n> \\n> \\n> **W6:** Key hyperparameters, such as the number of segments, remain unexplored, missing potential for optimization in specific contexts.\\n> \\n\\n### **Response:** \\n\\nOur CAM aims to evaluate whether LLMs\\u2019 attention is appropriately focused on important segments within the long input contexts. We separately compute the designed importance score for each segment to the given response and calculate the LLM\\u2019s attention weights on each segment. In this way, even if the important information spans multiple segments, our method still calculates scores for several different segments to get the final CAS. **As shown in our experiments, our method achieves better performance in Multi-doc QA and long-context Summarization, showing that our method can effectively handle the scenario where important information spans multiple segments**. Therefore, we further explore the impact of the important hyperparameter, segment length. Intuitively, an excessively large segment length tends to prevent the model from focusing on the fine-grained information within the segment, whereas an excessively small segment length can lead to more semantically incoherent segments. 
We report the results of GATEAU-LLaMA - 50\\\\% on LongBench-Chat in Real-world Settings.\\n\\n| **Length of Segment** | LongBench-Chat |\\n| --- | --- |\\n| 64 | 55.2 |\\n| **128 (reported in our paper)** | **56.8** |\\n| 256 | 56.2 |\\n| 512 | 56.4 |\\n| 1024 | 54.4 |\\n| 2048 | 53.6 |\\n| Full - 100% | 48.8 |\\n| -w/o CAM | 53.2 |\\n\\nAs shown in this table, different segment lengths affect the model's performance; however, as long as a reasonable length value is chosen, **the fluctuations in model performance are not significant**. Meanwhile, the performance will always be improved over using the whole long SFT dataset (namely Full-100%) and only using the HMG method (namely -w/o CAM), showing the effectiveness of our proposed CAM.\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **W5:** \\n> **W5:** The method averages attention across heads and layers without distinguishing them, potentially introducing noise, as different heads or layers may capture varied aspects of the input. Calibrating the attention aggregation could improve focus on relevant input sections.\\n> \\n\\n### **Response:** \\n\\nThis is an interesting point. Inspired by previous work[2], we harness the attention weights averaged across different decoder layers and attention heads to thoroughly model how the LLM utilizes long input contexts during response generation. In this way, as shown in our experiments, our method achieves consistently better performance. Meanwhile, choosing the optimal head and layer for long context alignment may require additional validation data and computational resources.\\n\\n[2] Found in the middle: Calibrating positional attention bias improves long context utilization. ACL Findings 2024\\n\\n\\\\\\\\\\\\\\n\\nWe trust that this additional information addresses your concerns, and we welcome any further inquiries or feedback you may have. We also hope that you can kindly increase your score if our response has helped address your concerns.\"}",
"{\"title\": \"Continue Response to Reviewer pvW2 for Weaknesses 1 (1/ 2)\", \"comment\": \"### **Response:**\\n\\n> **<3> Additional experiments to analyze the perplexity score generated from the base model.**\\n\\nWe further conduct additional experiments to explore the effect of perplexity scores generated from the base model. In HMG, we use in-context learning to align the base model and use the perplexity score from the aligned model to select long SFT data. Specifically, we use the same 3 demonstration examples as URIAL [2]. In this way, we can make the models more aligned without updating the parameters.\\n\\n| Method | **LongBench-Chat** | **MT-Bench** |\\n| --- | --- | --- |\\n| GATEAU-LLaMA - 50% in Real-world Settings | **56.8** | 57.3 |\\n| w/ ICL Alignment | 56.2 | **57.9** |\\n| GATEAU-LLaMA - 50% in Limited Short Instruction Data Settings | 59.0 | **54.2** |\\n| w/ ICL Alignment | **59.4** | 53.5 |\\n\\nHowever, as shown in this table, using the aligned model via in-context learning does not consistently improve the final performance. **This indicates that using only base models in the HMG phase can also achieve good results.** Therefore, HMG can effectively minimize the influence of other factors (e.g., the limited instruction-following ability of a base model) and model the difficulty in modeling the long-range dependencies.\\n\\n> **<4> Is the continual pre-training dataset significantly smaller than the pre-training dataset?**\\n\\nIn our method and experiments, LLaMA-2-7B-base-64k undergoes a post-training stage on a total of **10 billion** tokens to extend the context windows, compared to the pretraining stage of LLaMA-2-7B-base-64k and LLaMA-2-7B-base-4k, which utilizes about **2 trillion** tokens. **Thus, in our method and experiments, the continual pre-training dataset is significantly smaller than the pre-training dataset and makes long-context LLM $\\\\theta_{B}$ and short-context LLM $\\\\theta_{A}$ have similar other abilities**. 
Meanwhile, as suggested by previous work [3], continual training on 10B tokens is sufficient for context extension. Thus, in other LLMs, the continual pre-training dataset is always significantly smaller than the pre-training dataset (e.g., LLaMA-3 is pre-trained on a corpus of about 15T tokens).\\n\\n[2] The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning. ICLR 2024\\n\\n[3] Understanding data influence on context scaling. Yao Fu\\u2019s Notion 2023.\"}",
"{\"title\": \"Response to Reviewer KGEd (1/N)\", \"comment\": \"Thanks for your valuable review and suggestions! It is encouraging to see you find our methodology well-motivated and interesting.\\n\\nWe sincerely thank you for your time and constructive comments. Below, we provide detailed replies to your comments to resolve your concerns.\\n\\n### **W1:** \\n> **W1:** The study only tests on the LLaMA2 family, which may restrict the generalizability of findings. Additionally, HMG relies on perplexity differences between homologous models with different context windows, leaving unclear guidance on applying the technique to other models lacking such variants.\\n> \\n\\n### **Response:** \\nThanks for your interest! Our HMG method indeed requires two homologous models with different context windows, thus limiting the range of models we can use to further conduct the experiments. However, in practical scenarios, training a powerful long-context LLM always involves homologous models with different context windows (though these models may not be open-sourced). This is because existing LLMs are often initially pre-trained on a large-scale corpus with smaller context windows due to device limitations; they then conduct continual pre-training to extend the window size. Therefore, our method remains effective in real-world scenarios.\\n\\n\\\\\\\\\\\\\\\\\\n\\n\\n### **W2:** \\n> **W2:** There is no analysis of the time efficiency of the data selection method. Since CAM requires estimating importance scores for each segment, which may be computationally expensive, time efficiency is a critical factor.\\n> \\n\\n### **Response:** \\nThanks for your advice; we believe adding such details will help our paper be more comprehensive.\\n\\n<1> **Our Experimental Device**: Firstly, we want to explain the experimental device. 
As shown in Appendix A, all experiments are conducted on 8xA800 80G GPUs (the experiments are not limited to this type of GPU).\\n\\n<2> **GPU Execution Time:** Based on the principle of making full use of GPU devices (e.g., using a multi-processing strategy and choosing large batch sizes, etc.), we list the execution time in the following table:\\n\\n| **Stage** | **Execution Time** |\\n| --- | --- |\\n| Training on the full dataset in the real-world setting | ~176 GPU hours |\\n| Selecting long SFT data via HMG | ~64 GPU hours |\\n| Selecting long SFT data via CAM | ~48 GPU hours |\\n| Selecting long SFT data via Cherry Selection | ~80 GPU hours |\\n| Selecting long SFT data via Perplexity Guidance | ~32 GPU hours |\\n\\nAs shown in this table, we find that our method (HMG + CAM) introduces acceptable offline time overhead compared to the supervised fine-tuning stage and improves the overall performance of long-context LLMs. Perplexity Guidance applies a single LLM to compute the score; thus, it achieves less execution time but worse performance in our experiments. Meanwhile, another strong baseline, Cherry Selection, introduces an additional training stage and computes the proposed Instruction-Following Difficulty (IFD) by applying the forward propagation twice on a single long SFT sample, thus necessitating more execution time compared to our proposed HMG. Moreover, our CAM and HMG can process the data in parallel to further decrease the execution time, e.g., only 8 hours with 16xA800 80G GPUs. **Overall, compared to other baselines, the experimental results of our proposed GATEAU (consisting of HMG and CAM) demonstrate that the additional execution time is worthwhile.**\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **W3:** \\n> **W3:** The CAM module\\u2019s process for calculating attention weights is vague, particularly in obtaining the attention weight across tokens in response y to token tj. 
Including pseudo-code or further clarification would make the implementation more accessible.\\n> \\n\\n### **Response:** \\n\\nSorry for the misunderstanding; we first compute attention weights for each token in response $y$ to the token $t_j$ in the segment $s_i$. Then we average the attention weights from each token in response $y$ to get the score $Attn_{\\\\theta}(t_{j}|y;\\\\theta)$ in Eq. (4), and then compute $Attn_{\\\\theta}(s_i)$ according to Eq. (4). Meanwhile, we harness the attention weights averaged across different decoder layers and attention heads to thoroughly model how the LLM utilizes the long input contexts according to [1].\\n\\n[1] Found in the middle: Calibrating positional attention bias improves long context utilization. ACL Findings 2024\"}",
"{\"summary\": \"This paper introduces GATEAU, a data selection framework designed to identify critical instruction samples for long context alignment of LLMs. Specifically, GATEAU consists of two core modules: Homologous Models' Guidance (HMG) and Contextual Awareness Measurement (CAM). HMG assesses the difficulty of generating responses due to long-range dependencies by comparing the perplexity scores of homologous models with varying context windows. CAM measures how effectively a model focuses on relevant segments of extended input. The outputs of HMG and CAM are combined to produce a final ranking criterion for data selection. Extensive experiments indicate that the proposed method significantly enhances the long-context instruction-following capabilities of LLMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well motivated. Identifying high-quality long instruction-following samples is essential for improving long context alignment.\\n2. The proposed GATEAU aligns naturally with the challenges of long-range dependency modeling, making the methodology intuitive and easy to understand.\\n3. The authors demonstrate the effectiveness of GATEAU through extensive experiments across multiple benchmarks. The results consistently show that the selected samples significantly improve model performance on both long and short instruction-following tasks.\", \"weaknesses\": \"1. The proposed method may assign similar high scores to duplicate or highly similar samples, assuming they contribute independently to model improvement. However, repeated exposure to similar samples may not add incremental value and could even undermine the alignment process of LLMs.\\n2. The paper does not discuss the computational resources required to implement the GATEAU framework. 
A selection method that is slow or computationally prohibitive is less feasible in practical scenarios, especially for extremely long instruction-following examples.\\n3. Although the paper includes ablation studies, it lacks an experiment testing the effectiveness of using a single model's perplexity (e.g., LLaMA-2-7B-base-64k) as the sole criterion for data selection. Adding this experiment would provide valuable insight into the authors' claim on page 4, line 174, that high perplexity alone does not adequately reflect response difficulty in long-context scenarios.\", \"questions\": \"1. How does the proposed framework handle potentially redundant or highly similar samples? Could repeated high-scoring samples lead to an overrepresentation of certain types of data, thus limiting the diversity and richness of long-context dependencies in the selected dataset?\\n2. Would this approach remain feasible for datasets of larger scale, and what are the expected costs in terms of time and resources?\\n3. Is there some display problem in Figure 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Please participate in the discussion with the authors\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your efforts and contribution to ICLR! The authors have posted their responses to your original comments. Only less than two days are left for the reviewer-author discussion. Given the current borderline ratings, your help and prompt responses are important. Please actively check the authors' responses and participate in the discussion. \\n\\nThanks!\\n\\nBest regards,\\nYour AC\"}",
"{\"title\": \"Response to Reviewer rtwW (2 /N)\", \"comment\": \"### **W5 and Q2:**\\n> \\n> \\n> **W5:** It is a surprise for me that in Table 1 and 3, the proposed algorithm with less dataset can achieve SOTA results in all settings. So, can you give some example (whether simulation or real-world dataset, or theoretical analysis) to explain in what kind of situation the proposed algorithm may fail to achieve SOTA results?\\n> \\n> **Q2:** Could you give some examples to show the failure of the proposed algorithm? It is a little unbelievable that one algorithm can achieve the best performance on all datasets, especially with less training data than existing ones.\\n> \\n\\n### **Response:** \\nThanks for your interest in the performance of our method. Previous work [1] suggests that data quality is more important than data quantity and shows that instruction tuning on 1\\\\% of selected high-quality SFT data can outperform the method employing the entire dataset. As mentioned in our paper, previous works attempt to scale up the available data volume by synthesizing long instruction-following samples. However, the absence of a clear strategy for ensuring data quality may lead to the inclusion of low-quality samples. Thus, it is predictable that better performance can be achieved by using fewer but high-quality long SFT data.\\n\\nRegarding your question, one possible limitation is that our method is designed to improve overall performance in instruction-following and long-context understanding tasks. However, it is not suitable for improving performance in a targeted capability or task, e.g., only improving the performance of mathematical questions. As shown in Table 7 in the Appendix, our proposed method does not consistently improve the performance across all the different capabilities, e.g., our method achieves unsatisfactory performance in the role-playing task.\\n\\n[1] One-Shot Learning as Instruction Data Prospector for Large Language Models. 
ACL 2024\\n\\n\\\\\\\\\\\\\\\\\\n\\n### **W6 and Q3:** \\n> \\n> \\n> **W6:** If I understand correctly, in the proposed algorithm, we need to choose some part of the whole dataset (e.g. 10% or 30%) to train the model. How does this paper choose this percentage? And in real-life applications, how do we choose the part (whether 10% or 30%) from the whole dataset to align LLM?\\n> \\n> **Q3:** Could you give some insights into the setting of the utilization percentage of the whole dataset of the proposed algorithm?\\n> \\n\\n### **Response:** \\nThanks for your interesting and practical question. The ratio of selected long SFT samples is an important hyperparameter in our method. According to the results of experiments, we find our method is robust in the different ratios of used long SFT samples, including 10%, 30%, and 50%. Regarding your question, based on our comprehensive experiments in real-world settings, it is advisable to select 30% of the total data for real-life applications. When computational resources are abundant, we also recommend selecting the optimal ratio by evaluating it in your own real-life scenarios.\\n\\n\\\\\\\\\\\\\\\\\\n\\nWe hope this detailed explanation clarifies your concerns and underscores the significance of our contributions. If you have any further questions or require additional clarification, please feel free to reach out.\"}",
"{\"summary\": \"To effectively handle instructions with extremely long contexts in expansion of large language models (LLMs), this paper proposes GATEAU, a novel framework designed to identify the influential samples enriched with long-range dependency relations by utilizing crafted Homologous Models\\u2019 Guidance (HMG) and Contextual Awareness Measurement (CAM). Specifically, HMG uses the perplexity scores of the response from two homologous models to measure the ability in modeling long-range dependencies. CAM is used to measure difficulty of understanding the long input contexts by evaluating whether the model\\u2019s attention is focused on important segments. Extensive experiments on several LLM benchmarks demonstrate the effectiveness of the proposed algorithms.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"(1)\\tThis paper proposes an efficient and practical influential-sample-selecting algorithm for long context alignment of large language model. The motivation of the HMG and CAM component is clearly explained in Section 3.\\n\\n(2)\\tThe organization of this paper is clear and easy to follow. The notations used in Section 3 are all well clarified.\\n\\n(3)\\tThe experiments are truly extensive, validating the effectiveness of the proposed methods. In particular, the SOTA result in Tables 1-4 is a surprise for me that using part (e.g. 10\\\\%) of the whole dataset can achieve better performance than using the whole dataset. Ablation study in Section 4.3 also verifies the effectiveness of the components (i.e. 
HMG and CAM).\\n\\n(4)\\tOverall, the effort in selecting influential samples for long context alignment in LLM truly makes sense, and the proposed algorithm is simple, motivated, and effective.\", \"weaknesses\": \"(1)\\tIn Eq.(2), the explanations for the new notation $\\\\theta _{A}$ and $\\\\theta _{B}$ should follow immediately after their appearance in Eq.(2).\\n\\n(2)\\tIn Tables 1-4, the captions should be put at the top (instead of the bottom) of the table.\\n\\n(3)\\tIn section 4.2 Impact of GATEAU, the paper first analyzes the experimental results in Tables 2 and 4 (line 413), and then analyzes the results in Tables 1 and 3 (line 429). It will be better to analyze the experimental results according to the order of Tables.\\n\\n(4)\\tThe memory storage and running time of the proposed algorithm are missing, which in my opinion can help readers understand the proposed algorithms more comprehensively. For example, in Eq.(2), HMG uses short context model $\\\\theta _{A}$ and long context model $\\\\theta _{B}$ to compute the perplexity distance, so does the proposed algorithm have more model parameters (e.g. of $\\\\theta _{A}$ and $\\\\theta _{B}$ ) than the existing algorithm (e.g. only has $\\\\theta _{B}$)? Therefore, I would suggest adding a table to compare the memory burden as well as the execution time with the existing methods on long context alignment.\\n\\n(5)\\tIt is a surprise for me that in Table 1 and 3, the proposed algorithm with less dataset can achieve SOTA results in all settings. So, can you give some example (whether simulation or real-world dataset, or theoretical analysis) to explain in what kind of situation the proposed algorithm may fail to achieve SOTA results?\\n\\n(6)\\tIf I understand correctly, in the proposed algorithm, we need to choose some part of the whole dataset (e.g. 10\\\\% or 30\\\\%) to train the model. How does this paper choose this percentage? 
And in real-life applications, how do we choose the part (whether 10\\\\% or 30\\\\%) from the whole dataset to align the LLM?\", \"questions\": \"(1)\\tCould you give some comparisons of memory burden between the proposed algorithms and other existing methods?\\n\\n(2)\\tCould you give some examples to show the failure of the proposed algorithm? It is a little unbelievable that one algorithm can achieve the best performance on all datasets, especially with less training data than existing ones.\\n\\n(3)\\tCould you give some insights into the setting of the utilization percentage of the whole dataset of the proposed algorithm?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer 93rU,\\n\\n\\n\\nThank you for your valuable feedback. We would like to provide some additional explanations and details to clarify any misunderstandings between us.\\n\\n\\n\\n**<1> There were no data repetition issues in our experiments:** We use LongAlign [1] as the long SFT dataset in our experiments, which ensures that each data point is unique, **thus the data repetition issue you mentioned does not occur in our experiments**. Additionally, we have considered the similarity of data, i.e., the diversity of the data selected by GATEAU, details of which can be seen in <2>.\\n\\n\\n\\n**<2> Our method does not limit the diversity of selected data:** In our paper, we compare our GATEAU with the state-of-the-art method that considers data diversity, named CaR [2], and find that our method GATEAU consistently outperforms CaR. Meanwhile, as shown in Table 7 in the Appendix, our method achieves **better overall performance and more balanced performance in 8 different tasks**, showing the effectiveness and diversity of selected samples by GATEAU. In the additional experiments detailed in our response to you, we also find that integrating a module that focuses on preserving data diversity (namely -w Diversity-preserved Selection) into our existing method GATEAU did not consistently improve the final performance, **suggesting that GATEAU does not limit the diversity of the selected data.**\\n\\n\\n\\n**<3> Our method is applicable to large-scale long SFT datasets:** In our previous response, we provide detailed data demonstrating that our method does not introduce any additional GPU burdens, and only incurs an acceptable increase in execution time, making it easily applicable to large-scale long SFT datasets. 
**Additionally, the long SFT dataset LongAlign [1] that we used in our experiments is one of the largest available long SFT datasets, containing 10,000 long SFT samples.**\\n\\n\\n\\nWe hope our response has addressed your concerns, and we look forward to your further feedback.\\n\\n\\n[1] LongAlign: A Recipe for Long Context Alignment of Large Language Models. EMNLP 2024 Findings\\n\\n[2] Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation. EMNLP 2024\"}",
"{\"title\": \"General Response to All Reviewers\", \"comment\": [\"Dear Reviewers,\", \"We thank all reviewers for their insightful comments and acknowledgment of our contributions.\", \"**We greatly appreciate your recognition of the strengths of our work as follows:**\", \"**Introduction of GATEAU**\", \"We present GATEAU, **as a novel framework (`rtwW`)**, recognized by **all reviewers as well-motivated.**\", \"Our framework offers an **efficient and practical** algorithm (**`rtwW`**) to address an **important and essential** challenge (**`KGEd`**, **`93rU`**) for long-context alignment.\", \"**Methodological Effectiveness**\", \"Our method has been acknowledged by **all reviewers** for its **effectiveness, with a clear motivation (`rtwW`, `KGEd`),** which naturally aligns with the challenges of modeling the long-range dependency (**`93rU`**).\", \"**Comprehensive Experiments**\", \"Our experiments are acknowledged for their **comprehensiveness** by **all reviewers**, validating the **versatility** and **effectiveness** of the proposed methods (**`KGEd`**, **`93rU`**, **`rtwW`**), and showcasing a **well-motivated and interesting** (**`KGEd`**) method for this field.\", \"**We've revised our manuscript per the reviewers' suggestions** (**highlighted in red** in the uploaded revision pdf). Detailed responses to each reviewer's concerns are carefully addressed **point-by-point**.\", \"Below, we summarize the **major updates** we've made:\", \"**Presentation:** We fix the typos in the uploaded revision pdf to make the organization of this paper clearer according to the suggestions from reviewers **`rtwW`** and **`pvW2`**.\", \"**Experiment:** We further conduct the following experiments to make our paper more sound and try to address reviewers' concerns.\", \"**Additional ablation study on 13B models**: (**`pvW2`**). We further conducted the **additional ablation study for 13B models** in our revised paper. 
This indicates the effectiveness of our proposed GATEAU.\", \"**Comparison of execution time**: (**`rtwW`,** **`pvW2`**, and **`93rU`**). We further **compare** the **execution time** of our method with other baselines. The experiment shows that our method introduces **acceptable offline time** overhead compared to the **other baselines** and the SFT stage. Meanwhile, the performance results of our proposed GATEAU (consisting of HMG and CAM) demonstrate that the additional execution time is worthwhile.\", \"**Exploration of the diversity of selected samples**: (**`93rU`**). We explore the **diversity** of the samples selected by GATEAU. Results show that our proposed GATEAU has partially **addressed sample redundancy** and implicitly **ensured the diversity** of selected long SFT data.\", \"**Further exploration of HMG**: (**`pvW2`**) We further explore the designed HMG. As shown in experiments in our response to the **reviewer `pvW2`**, we find that using only base models in the HMG phase can achieve good results. It shows that **HMG can effectively minimize the influence of other factors** (e.g., the limited instruction-following ability of a base model) and model the difficulty in modeling the long-range dependencies.\", \"**Further exploration of CAM**: We explore the effect of the number of segments in CAM based on the review from **`KGEd`**. As shown in our experiment **in our response to the reviewer `KGEd`**, different segment lengths affect the model's performance; however, as long as a reasonable length value is chosen, the fluctuations in model performance are not significant. 
Meanwhile, the performance will always be improved over using the whole long SFT dataset and only using the HMG method, showing the effectiveness of our proposed CAM.\", \"**Explanation:** We attempt to provide the following explanation to address the misunderstandings in our paper.\", \"**GPU memory burden:** We discuss the process of our proposed GATEAU in detail, and explain our proposed GATEAU does **not add an extra GPU memory** burden.\", \"**Need for using a single model's perplexity as the sole criterion for data selection:** For the concern from reviewer **`93rU`**, we have already compared this important baseline in the submission version of our paper, namely **Perplexity Guidance**, in our comprehensive experiments (e.g., **Table 1, Table 2, Table 3, Table 4, and Table 8 in the Appendix**) to show our valuable insight, i.e., the high perplexity alone does not adequately reflect response difficulty in long-context scenarios.\", \"We believe our work could make a novel contribution to the community and offer a novel perspective on addressing the challenges of long context alignment of LLM.\", \"We would like to be involved in further discussions if any question is raised.\", \"Best,\", \"Authors.\"]}",
"{\"comment\": \"Dear Reviewer pvW2,\\n\\nWe would like to thank you again for your detailed reviews. We have updated our draft and added replies to your concerns with our latest experimental results.\\n\\nSince the rebuttal deadline is approaching soon. Given that your current score is 5, we would appreciate it if you could let us know if our responses have addressed your concerns satisfactorily. If your concerns have not been resolved, could you please let us know about it so that we have the opportunity to respond before the deadline?\\n\\nWe would be happy to have any follow-up discussions or address any additional concerns. \\n\\nLooking forward to your reply.\\n\\nBest,\\n\\nAuthors\"}",
"{\"summary\": \"This paper proposes a new data selection algorithm, GATEAU, for training long-context instruction following models. Given a long-context instruction following dataset, the algorithm calculates two metrics: Homologous Models' Guidance (HMG) and Contextual Awareness Measurement (CAM). In particular, HMG measures the difficulty of generating responses due to long-range dependencies by comparing perplexity scores of responses between two homologous models with different lengths of context windows. CAM measures the difficulty of utilizing important parts of the long input by evaluating whether the important segments are being utilized using the attention scores. The author measures the effectiveness of the proposed method on various benchmarks (e.g., LongBench, LongBench-Chat and MT-Bench).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing is clear (except some questions I have in the Questions section below) and the author explains the proposed method well with clear formulations.\\n2. The author uses different benchmarks (LongBench, LongBench-Chat, MT-bench, Needle-in-a-haystack) to show the effectiveness of the proposed method on (long/short)-context instruction following tasks.\", \"weaknesses\": \"1.\\n- The method proposed heavily depends on the perplexity that measures the similarity between the base model's answer and the desired answer. The model $\\\\theta_A$ (e.g., LLaMA-2-7B-base-64k as mentioned in the paper) in the method that calculates the perplexity is a base model (model that only does completion). It does not make sense to measure the perplexity of a base model on an instruction-following dataset because the base model wasn't trained to follow instructions. In this case, the perplexity can be high and it does not accurately measure the difficulty of a document. 
The author also acknowledges this and in the first metric (HMG) uses a homologous short context model $\\\\theta_B$ as a reference model and calculates the difference between PPL of $\\\\theta_A$ and $\\\\theta_B$. The author claims that this \\\"mitigate the influence brought by lacking other capabilities\\\" (e.g., instruction-following capability, long-context capability). However, there is no guarantee that lacking other capabilities contributes equally to increasing the PPL of the two models. Essentially, the data on which the models measure the PPL is out-of-distribution data (both models were not trained on the instruction-following data and $\\\\theta_B$, which is extended to a longer-context window using a zero-shot method, was not trained on long-context data) and the model's behavior is unpredictable. Plus, $\\\\theta_A$ was obtained by continual pretraining on long-context data. While HMG assumes $\\\\theta_A$ and $\\\\theta_B$ are similar models, it also depends on what the continual pretraining dataset is (here the method implicitly assumes that the additional dataset is small). If the additional dataset is very large, $\\\\theta_A$ and $\\\\theta_B$ can perform very differently.\\n- Moreover, for the second metric (CAM), the LLM that measures the PPL was not trained on an instruction-following dataset (and therefore the behavior is unpredictable) and there is no such reference model.\\n\\n2. LESS [1] is an optimizer-based method that selects a subset of an instruction-following dataset by estimating data influence (selecting data points that minimize the validation loss) and this can serve as one of the baselines.\", \"minor_issues\": \"1. For Table 5, current bold numbers are a bit misleading. Usually bold numbers indicate the highest numbers across some category but here it seems the proposed method is bold. Also, for 13B models, it seems the w/o HMG and w/o CAM settings are not reported. Is there a particular reason for not doing so (e.g., computational constraint)? 
The ablation study on the 7B model does show the effectiveness.\\n\\n[1] Xia, M., Malladi, S., Gururangan, S., Arora, S., & Chen, D. (2024). Less: Selecting influential data for targeted instruction tuning. arXiv preprint arXiv:2402.04333.\", \"questions\": \"1. For perplexity guidance (line 377), which $\\theta$ do you use ($\\theta_A$ or $\\theta_B$ -- I am assuming $\\theta_B$ here)?\\n2. What is the context length of MT-Bench? The paper mentions that MT-Bench is for short-context instruction following. Since the proposed method, GATEAU, is designed for long contexts, do you have any hypothesis on why it also improves on the short-context instruction following tasks (Table 4)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
E77uvbOTtp | CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models | [
"Hyungjin Chung",
"Jeongsol Kim",
"Geon Yeong Park",
"Hyelin Nam",
"Jong Chul Ye"
] | Classifier-free guidance (CFG) is a fundamental tool in modern diffusion models for text-guided generation. Although effective, CFG has notable drawbacks. For instance, DDIM with CFG lacks invertibility, complicating image editing; furthermore, high guidance scales, essential for high-quality outputs, frequently result in issues like mode collapse. Contrary to the widespread belief that these are inherent limitations of diffusion models, this paper reveals that the problems actually stem from the off-manifold phenomenon associated with CFG, rather than the diffusion models themselves. More specifically, inspired by the recent advancements of diffusion model-based inverse problem solvers (DIS), we reformulate text-guidance as an inverse problem with a text-conditioned score matching loss and develop CFG++, a novel approach that tackles the off-manifold challenges inherent in traditional CFG. CFG++ features a surprisingly simple fix to CFG, yet it offers significant improvements, including better sample quality for text-to-image generation, invertibility, smaller guidance scales, reduced etc. Furthermore, CFG++ enables seamless interpolation between unconditional and conditional sampling at lower guidance scales, consistently outperforming traditional CFG at all scales. Moreover, CFG++ can be easily integrated into the high-order diffusion solvers and naturally extends to distilled diffusion models. Experimental results confirm that our method significantly enhances performance in text-to-image generation, DDIM inversion, editing, and solving inverse problems, suggesting a wide-ranging impact and potential applications in various fields that utilize text guidance. Project Page: https://cfgpp-diffusion.github.io/anon | [
"Diffusion models",
"Manifold",
"Classifier-free guidance"
] | Accept (Poster) | https://openreview.net/pdf?id=E77uvbOTtp | https://openreview.net/forum?id=E77uvbOTtp | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xhDDmyaJgy",
"u0AN13ex9J",
"s3WA8PSa0H",
"rxhYT0Mq5X",
"q2Kcfqb0BL",
"poaCpc7znf",
"oZTWGV3G1l",
"lMneM9iHFb",
"gIBxxSD2Q0",
"RaeRMEW98b",
"PyckTxUObW",
"KpLD5XdYnn",
"Jqv0UBa2Nd",
"JICGSkX8RT",
"HqeOezcKZ8",
"G98WypY4Bq",
"9ZaDxgzgji",
"8kmMGZ1cqX",
"8PRWybxuM2",
"87NwzTHUJN",
"61cwujQzSO",
"5l5gyiSvWY",
"4u2Cx7PFVR"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1730710989175,
1732541227482,
1732076608885,
1734968218489,
1730253735567,
1732637113013,
1732544467895,
1732543278069,
1730820377593,
1732637179717,
1732633898504,
1730665987654,
1732539539122,
1732543143394,
1732549039612,
1732076775525,
1737523444655,
1732076287460,
1732076332185,
1732076360615,
1732536389517,
1732636981726,
1732548686931
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_7gyf"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_NrB8"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Area_Chair_a9GC"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_NrB8"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_NrB8"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_GmUK"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_hzx1"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_hzx1"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_GmUK"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_GmUK"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Reviewer_7gyf"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Area_Chair_a9GC"
],
[
"ICLR.cc/2025/Conference/Submission1270/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1270/Area_Chair_a9GC"
]
],
"structured_content_str": [
"{\"summary\": \"The paper introduces CFG++, a novel approach to fixing the off-manifold issues that can occur in CFG (classifier-free guidance) during sampling. They reformulate CFG as a manifold-constrained problem, transforming the conditional guidance from extrapolation to interpolation between unconditionally sampled trajectories and conditionally sampled trajectories, thus making the guidance more interpretable. Experimental results show that the method can reduce artifacts, solve the DDIM inversion problem, and be integrated into high-order diffusion solvers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. CFG++ introduces a novel approach by redefining classifier-free guidance as a constrained manifold problem, which effectively improves the generation quality by staying within the data manifold.\\n2. This paper validates the effectiveness of the proposed method on standard benchmarks and different tasks (e.g., text-to-image generation, inversion, editing).\\n3. The paper is well-written and easy to follow.\", \"weaknesses\": \"1. There are a few minor issues in the tables, such as the direction of the arrow in the fourth column of CLIP in Table 1 (which should be as high as possible) and the missing ImageReward metric for SD v1.5 in Table 2.\\n\\n2. This paper focuses on addressing the inability to perform successful DDIM inversion with classifier-free guidance, so it should include a comparison of the results with this class of methods like [1].\\n\\n3. I'm a little suspicious of ''Contrary to the widespread belief that these are inherent limitations of diffusion models, this paper reveals that the problems actually stem from the off-manifold phenomenon associated with CFG...'' in the Abstract. As far as I know, current work has generally recognized and worked on the CFG problem. Perhaps rewording would better reflect the originality of the paper.\\n\\n[1] Mokady, Ron, et al.
\\\"Null-text inversion for editing real images using guided diffusion models.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.\", \"questions\": \"1. SDXL-Turbo does not make use of guidance_scale or negative_prompt. Instead, they disable it with guidance_scale=0.0. I would like to know how to verify the validity of CFG/CFG++ in this case.\\n\\n2. Does this method also work for negative prompts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer GmUK,\\n\\nBeing a reviewer is an extremely responsible task. Given that you have given such high ratings and yet cannot tolerate others' opinions, can we suspect that you have special interests with the author and request to disregard your positive reviews?\"}",
"{\"comment\": \"**Q1.** Derivations for flow-matching models such as SD3 or FLUX?\\n\\n**A.** Thank you for the insightful suggestions. Please see Appendix B, where we generalize CFG++ with flow matching, which subsumes diffusion models as specific instances. Specifically, vanilla CFG can be applied to the flow-based generative models as follows [1]:\\n\\n$dx_t = [v_\\\\theta(x_t, t, \\\\varnothing) + \\\\omega (v_\\\\theta(x_t, t, c) - v_\\\\theta(x_t, t, \\\\varnothing))] dt$\\n\\nThis can be solved by using various off-the-shelf ODE solvers detailed in Appendix A, fully reproducing the derivations of CFG++. We believe your suggestions open new avenues for text-conditional sampling in flow-based generative modeling, and we look forward to future developments in this direction.\\n\\n**Q2.** Is it possible to disentangle two noises used in the denoising and renoising process? For example, we can set different hyperparameters $\\\\lambda_1$ and $\\\\lambda_2$ for $(\\\\epsilon_c-\\\\epsilon)$, and use this term in different weights for denoising and renoising respectively. Setting $\\\\lambda_1=\\\\lambda_2$ is the same as CFG and setting $\\\\lambda_2=0$ is the same as CFG++.\\n\\n**A.** Great question. Please see Appendix D, where we set CFG and CFG++ as two special cases of a more general form of guidance functions. From Appendix C (and also similar to how the reviewer derived it), it is evident that the guidance scale is a composition of two different functions. CFG is a special case where the weighting between these two functions is identical, and CFG++ is a special case where one of the functions is turned off. Interestingly, by turning one of the functions off, CFG++ induces a smooth increase in the guidance scale, whereas CFG has a sharp peak at the starting point, potentially explaining the saturation behavior in the earlier stages. Interpolating between these two yields different forms of guidance schedules, where the sharp peaks are mitigated as they get closer to CFG++.
We report the results of each case here.\\n\\n**Q3.** Derivations of CFG++ for other diffusion solvers besides DDIM?\\n\\n**A.** Extensions of CFG++ to other solvers can be derived in a similar fashion to DDIM. Specifically, in order to solve inverse problems with higher-order (or stochastic) solvers, one would keep all the components null-conditioned, and only modulate the Tweedie component. This is how Eq. (15) can be derived from Eq. (14). We modified Eq. (15) so that this is clearer:\\n\\n$x_i = (\\\\hat{x}(x_{i-1};\\\\varnothing) - \\\\lambda \\\\nabla_{\\\\hat{x}(x_{i-1};\\\\varnothing)} \\\\ell_{sds}(\\\\hat{x}(x_{i-1};\\\\varnothing))) + a_i \\\\hat{x}(x_{i-1};\\\\varnothing) + b_i \\\\hat{x}(x_{i-2};\\\\varnothing) + c_i x_{i-1} + d_i \\\\epsilon$\\n\\n**Q4.** It may be incorrect to claim \\\"these findings are orthogonal to ours and keep the sampling trajectory the same ... CFG++ is designing a different trajectory\\\"\\n\\n**A.** Thank you for pointing this out. We removed this statement from our manuscript.\\n\\n\\n**References**\\n\\n[1] Kim, Beomsu, et al. \\\"Simple ReFlow: Improved Techniques for Fast Flow Models.\\\" arXiv preprint arXiv:2410.07815 (2024).\"}",
"{\"metareview\": \"This paper works on Classifier-free guidance (CFG) in diffusion models for text-guided generation. It tackles the off-manifold problem in CFG by proposing a simple revision to the original CFG, resulting in improvements with better sample quality for text-to-image generation. Experimental results demonstrate the effectiveness in text-to-image generation, DDIM inversion, editing, and image inverse problems. Four reviewers (hzx1, 7gyf, GmUK, NrB8) gave scores of 6, 6, 8, 1 after rebuttal. The first three reviewers are positive on the simplicity and experimental justification of improvement over CFG. There are some debates on the fourth reviewer's comments in the post-rebuttal phase. Reviewer NrB8 initially raised several questions and the authors answered these questions in the rebuttal. There is controversy over reviewer NrB8's question in the discussion phase. After discussion, reviewer NrB8 gave a score of 1 in the final decision. The AC has asked reviewer NrB8 to provide reasons for the decision and remaining concerns on the authors' responses, but he/she did not respond to the authors' rebuttal. Therefore, the rating of reviewer NrB8 is not fully supportive. Considering the first three reviewers' positive final decisions, the paper can be accepted, but the authors are suggested to further revise the paper considering these discussions.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer hzx1 questioned the CFG method's formulation and the claim that CFG++ is invertible and brings more diversity, and also suggested reporting average performance, etc. Reviewer 7gyf raised questions on comparison with methods like [1], revision of a claim in the abstract, validity of CFG/CFG++ with guidance_scale=0.0, etc. Reviewer GmUK is more positive and suggested, e.g., working on flow matching-based generative models, extension of CFG++ to other diffusion solvers, etc.
These reviewers are mostly satisfied with the authors' rebuttal in the discussion phase. Reviewer NrB8 initially raised several concerns/suggestions on the limited scope of experiments, detailed comparison with a broader array of state-of-the-art techniques in diffusion models, providing a rigorous mathematical framework, how CFG++ scales with increasing model size or complexity, etc. These comments seem to be general, and the authors have answered these concerns, some of which were already addressed in the submitted manuscript. There are some debates on reviewer NrB8's comments, but reviewer NrB8 did not provide support for the final rating with a score of 1. Therefore, the first three reviewers' comments are taken into account with higher weights in the final decision.\"}",
"{\"summary\": \"The paper presents CFG++, an innovative approach designed to enhance the performance of diffusion models by addressing limitations associated with traditional Classifier-Free Guidance (CFG). The authors argue that several drawbacks of CFG, such as the lack of invertibility in DDIM (Denoising Diffusion Implicit Models) and the issues of mode collapse when high guidance scales are used, arise from the off-manifold phenomena related to CFG rather than the inherent characteristics of diffusion models themselves.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper exhibits a high degree of originality by introducing the CFG++ framework, which fundamentally reinterprets the limitations of traditional Classifier-Free Guidance (CFG). Instead of merely enhancing existing methodologies, the authors propose that many issues attributed to diffusion models arise from off-manifold behavior. This shift in perspective not only challenges established assumptions but also provides a fresh lens through which to analyze and improve guidance mechanisms in generative models. By integrating recent advancements in inverse problem-solving within diffusion contexts, the authors demonstrate a creative synthesis of ideas that broadens the scope of CFG applications.\\n\\n\\nThe quality of the research is evident in both its theoretical foundations and empirical validation. The authors articulate their arguments clearly, supported by robust mathematical formulations and well-structured experiments. The comparative analysis illustrates the advantages of CFG++ over prior methods, effectively showcasing the model\\u2019s performance improvements in terms of reduced artifacts and enhanced image quality. Additionally, the use of rigorous benchmarks for evaluation strengthens the credibility of the findings. 
The paper\\u2019s methodology is sound, making it a valuable contribution to the field.\\n\\n\\nThe clarity of the writing is commendable, as the authors navigate complex concepts with precision. The introduction provides a succinct overview of the issues addressed and the motivations behind the proposed solution. Throughout the paper, technical jargon is well-defined, ensuring accessibility for readers with varying levels of expertise. Figures and diagrams are effectively utilized to illustrate key points, facilitating a better understanding of the model's mechanics and results. Overall, the paper is well-organized, which enhances its readability and facilitates comprehension.\", \"weaknesses\": \"One notable weakness is the limited scope of experiments presented. While the authors provide comparative results demonstrating the efficacy of CFG++, the evaluation primarily focuses on specific datasets or tasks. To enhance the robustness of their claims, the authors could include a broader range of benchmarks across diverse domains. For instance, incorporating tasks from different image generation contexts (e.g., artistic style transfer, super-resolution) would provide a more comprehensive assessment of CFG++\\u2019s performance and generalizability. The authors should extend their experimental framework to include varied datasets and tasks, thereby demonstrating the versatility and applicability of CFG++ in different settings.\\n\\n\\nAlthough the paper claims improvements over prior methods, it lacks a detailed comparison with a broader array of state-of-the-art techniques in diffusion models. For example, a comparison with models like **Stable Diffusion** or recent advancements in **Guided Diffusion** could contextualize the advantages of CFG++ more clearly. 
The authors should include direct comparisons with a wider selection of contemporary approaches, utilizing standardized metrics (such as FID scores) to provide a clearer understanding of CFG++\\u2019s standing in the current landscape.\\n\\nWhile the paper introduces the concept of manifold constraints and off-manifold behavior, the theoretical underpinnings could be further strengthened. The authors briefly mention these concepts but do not provide a rigorous mathematical framework that details how CFG++ operates under these constraints or why these adjustments yield better results. A more thorough theoretical analysis, including mathematical proofs or derivations that illustrate the benefits of operating within manifold constraints, would bolster the paper's credibility and understanding.\\n\\n\\nThe paper does not address how CFG++ scales with increasing model size or complexity. As generative models continue to grow, understanding how CFG++ performs under these conditions is crucial. There is a lack of discussion regarding potential computational overhead or challenges in real-world applications. Including analyses or experiments that assess the scalability of CFG++ with larger models or datasets, along with a discussion on computational efficiency, would provide valuable insights for practitioners considering its application.\\n\\nThe paper primarily focuses on quantitative metrics, such as image quality, but does not incorporate user-centric evaluations. Assessments based on human judgment, such as user studies to evaluate the perceived quality or usefulness of generated images, could provide a more holistic view of CFG++\\u2019s effectiveness. The authors should consider conducting qualitative studies where human evaluators assess the output of CFG++ compared to other methods. 
This could provide insights into the model's real-world applicability and user satisfaction.\\n\\n### Conclusion\\nWhile the paper \\\"CFG++: Manifold-Constrained Classifier-Free Guidance for Diffusion Models\\\" presents significant contributions, addressing these weaknesses would enhance its overall impact and clarity. By broadening experimental evaluations, strengthening theoretical foundations, and considering scalability and user-centric perspectives, the authors can more effectively support their claims and goals.\", \"questions\": \"**Question**: Could you elaborate on the specific mechanisms through which off-manifold behavior impacts the performance of traditional Classifier-Free Guidance (CFG)? A detailed explanation of how off-manifold behavior is identified and measured within the context of your experiments would enhance understanding. Including visualizations or theoretical examples could further clarify this concept.\\n\\n**Question**: What are the reasons behind selecting the specific datasets used in your experiments? Are there plans to test CFG++ on a wider array of datasets or tasks? Expanding the experimental framework to include diverse datasets and applications would strengthen the validity of your findings. Consider including datasets from different domains (e.g., text-to-image generation, video synthesis) to demonstrate the versatility of CFG++.\\n\\n**Question**: Why were certain state-of-the-art methods, such as Stable Diffusion and other recent advances in guided diffusion, excluded from your comparative analysis? Including a more comprehensive set of comparisons with these methods would provide a clearer context for evaluating the performance of CFG++. Detailed performance metrics and qualitative results could enrich the discussion.\\n\\n**Question**: Can you provide more in-depth mathematical justifications or derivations that support the effectiveness of CFG++ in addressing the limitations of traditional CFG? 
A rigorous theoretical framework would enhance the paper's credibility. Detailed mathematical formulations explaining how CFG++ operates under manifold constraints could help bridge the gap between theory and practice.\\n\\n **Question**: How does CFG++ perform in terms of computational efficiency and scalability with larger models or datasets? Including a discussion on scalability, including any computational overhead observed during experiments, would be valuable. Experiments assessing performance on larger models could illustrate the practical applicability of CFG++ in real-world scenarios.\\n\\n **Question**: Have you considered conducting user studies to evaluate the perceived quality and usefulness of the outputs generated by CFG++? Incorporating qualitative assessments from human evaluators could provide insights into the model's effectiveness from a user perspective. Such studies could help highlight the practical implications of using CFG++ in various applications.\\n\\n\\n**Question**: What future research directions do you foresee emerging from your findings regarding CFG++ and manifold constraints in diffusion models? A discussion on potential extensions or related areas of research would help contextualize your contributions within the broader landscape of machine learning, inviting collaboration and further inquiry.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"There are no Ethics Concerns\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We would like to thank reviewer ```GmUK``` for the positive and constructive comments throughout the review process. We are glad that we resolved the concerns.\"}",
"{\"comment\": \"I have been serving as a community reviewer for over a decade, and this is the first time I have encountered a situation where a reviewer comments on the opinions of another reviewer. I believe this is beyond the purview of a reviewer. A reviewer's role is to provide an evaluative assessment of the article's value to the Associate Editor (AC). It is the AC's responsibility to determine whether this evaluation is reasonable and valuable. In the case of this review, such an unusual occurrence is beyond my authority to judge and should be left to the AC for the final evaluation.\\n\\n**I have concerns about the fairness of this review process.**\"}",
"{\"comment\": \"Dear Reviewer NrB8,\\n\\nThank you very much for the instant reply. It alleviates my suspicion that you might be a robot. I'm merely curious how you managed to write such extensive reviews while assigning an extremely low confidence score (i.e., 1, which means \\\"You are unable to assess this paper and have alerted the ACs to seek an opinion from different reviewers.\\\"). Everyone has the right to express their thoughts on OpenReview as a human being (as long as not a robot), including you, and as you mentioned, \\\"being a reviewer is an extremely responsible task\\\".\\n\\nThank you again for your efforts in encouraging high-quality reviews for ICLR.\\n\\nBest regards,\\nReviewer GmUK\"}",
"{\"summary\": \"This paper proposes to tackle Classifier Free Guidance's (CFG) limitations, particularly its lack of invertibility and mode collapse. The authors formulate the hypothesis that these limitations come from the off-manifold phenomenon. The proposed method leverages recent approaches developed for inverse problem-solving through diffusion models. The proposed method, named CFG++, is compared with CFG for text-to-image generation, image editing or image restoration.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is generally well-written with a useful high-level preliminary section. One of the strengths of the method is its simplicity while being well justified. The paper proposes many experiments showing relative gains compared to CFG.\", \"weaknesses\": [\"The CFG method formulation seems different than in the original paper, see eq (6) from (Ho and Salimans, 2021). Moreover, the $w=0$ setting in the experiments should be equivalent to no guidance at all when using CFG according to eq (6) (from the CFG++ method). In Fig. 6, however, the images appear edited with $w=0$, indicating some contradiction.\", \"The ImageReward metric is not defined. The FID and CLIP metrics are very similar between CFG and CFG++, especially when using SD v1.5. In Tab. 1 the FID and CLIP are also very close for both methods. This tends to indicate that the gains are relatively marginal even in a low NFE setting.\", \"The statement that the proposed CFG++ is invertible and brings more diversity in the generation is not evaluated.\", \"**Typos**\", \"The variable $x_c$ is not introduced in line 228\", \"Table 1., column 2, CLIP score arrow does not show the correct direction\", \"line 238 ``a crucial difference in the **renoising** process'': shouldn't it be denoising?\"], \"questions\": [\"Having the average performance across the different corruptions in Tab. 3 would help.
Here, the proposed method seems more efficient than CFG at retrieving the clean image according to the FID metric but not so much according to the LPIPS one. Do the authors have a justification?\", \"It is unclear why the performance of CFG drops as the NFE increases in Fig. 6 (b).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We thank reviewer ```7gyf``` for the constructive comments. We would be happy to discuss further if the reviewer has any additional questions later on.\"}",
"{\"comment\": \"I would like to thank the authors for their reply. Most of my concerns have been addressed and I am rather positive about the rebuttal.\\n\\nThe proposed CFG++ reduces the invertibility error, but this is not equivalent to being invertible as claimed in the paper contributions (l. 107). I suggest that the authors soften this claim in the introduction.\\n\\nIn the updated version of the paper, the ImageReward of SD v1.5 (DPM++ 2M) seems particularly low and CFG outperforms CFG++. This result is not commented on. Is there a reason explaining this behavior?\"}",
"{\"summary\": \"This work introduces an enhanced version of the widely used classifier-free guidance (CFG) technique, CFG++. By formulating conditional generation as an inverse problem and utilizing score distillation sampling (SDS) loss, CFG++ improves various tasks, including text-to-image generation, DDIM inversion-based editing and text-conditioned inverse problems.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Overall, this manuscript is excellent, and I hope it gets accepted. Below, I outline some strengths that support this view:\\n\\n1. **Well-written**: The paper is well-written with coherent logic and clear notation, making it easy to understand and follow.\\n\\n2. **Thorough Experiments**: The work studies many downstream tasks, including text-to-image generation for SDv1.5, SDXL, and their distilled versions, as well as DDIM inversion and editing, PF-ODE trajectories, and text-conditioned inverse problems. The authors provide extensive quantitative and qualitative results to show the superiority of the proposed method.\\n\\n3. **Intriguing Derivation and Analysis**: The derivation of CFG++ is intriguing by treating text-conditioned generation as an inverse problem and utilizing SDS as the loss function. Moreover, the authors present various perspectives to analyze CFG++, including manifold geometry, score matching loss throughout the denoising process, and the evolution of the posterior mean. These analyses are insightful and helpful in understanding the underlying mechanisms of CFG++.\\n\\n4. **Simple Yet Widely Applicable Method**: The proposed method essentially modifies the re-noising process of the original CFG, but it achieves improvements across various downstream tasks and could be potentially effective for other tasks relevant to the ICLR community. 
Additionally, the DDIM solver, as well as other popular diffusion solvers like EDM and DPM-Solver, is derived, establishing CFG++ as a general method.\", \"weaknesses\": \"Generally, I think there are no obvious weaknesses in this work. However, there are some questions; please refer to the Questions part.\", \"questions\": \"1. Only diffusion-/score matching-based generative models are discussed in this work. Could you please **provide some similar derivations for the recent flow matching-based generative models, such as SD3 and FLUX**? I believe similar conclusions stand for flow-based methods, and it will make this work more comprehensive.\\n\\n2. Is it possible to **disentangle two noises used in the denoising and renoising process**? For example, we can set different hyperparameters $\\\\lambda_1$ and $\\\\lambda_2$ for $(\\\\epsilon_c-\\\\epsilon)$, and use this term in different weights for denoising and renoising respectively. Setting $\\\\lambda_1=\\\\lambda_2$ is the same as CFG and setting $\\\\lambda_2=0$ is the same as CFG++.\\n\\n3. Is it possible to **provide derivations of CFG++ for other diffusion solvers**, besides DDIM? Extensions of CFG++ to other solvers in Appendix A are more like intuitive understanding, rather than derivations from inverse problems and SDS loss, as done for DDIM.\\n\\n4. 
Essentially, **CFG++ can be written as reweighted CFG whose $\\\\omega$ varies along the sampling process** (let $s\\\\coloneqq t-1$ and $\\\\epsilon\\\\coloneqq \\\\epsilon_\\\\emptyset$ for easier LaTeX rendering in OpenReview):\\n\\n- For DDIM CFG: $$\\\\mathbf{x}_s=\\\\frac{\\\\sqrt{\\\\bar{\\\\alpha}_s}}{\\\\sqrt{\\\\bar{\\\\alpha}_t}}\\\\mathbf{x}_t-\\\\frac{\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}-\\\\sqrt{(1-\\\\bar{\\\\alpha}_s)\\\\bar{\\\\alpha}_t}}{\\\\sqrt{\\\\bar{\\\\alpha}_t}}(\\\\omega\\\\epsilon_c-(\\\\omega-1)\\\\epsilon)$$\\n\\n- For DDIM CFG++: $$\\\\mathbf{x}_s=\\\\frac{\\\\sqrt{\\\\bar{\\\\alpha}_s}}{\\\\sqrt{\\\\bar{\\\\alpha}_t}}\\\\mathbf{x}_t-\\\\frac{\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}}{\\\\sqrt{\\\\bar{\\\\alpha}_t}}(\\\\lambda\\\\epsilon_c-(\\\\lambda-1)\\\\epsilon)+\\\\sqrt{1-\\\\bar{\\\\alpha}_s}\\\\epsilon$$\\n$$=\\\\frac{\\\\sqrt{\\\\bar{\\\\alpha}_s}}{\\\\sqrt{\\\\bar{\\\\alpha}_t}}\\\\mathbf{x}_t-\\\\frac{\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}-\\\\sqrt{(1-\\\\bar{\\\\alpha}_s)\\\\bar{\\\\alpha}_t}}{\\\\sqrt{\\\\bar{\\\\alpha}_t}}(\\\\frac{\\\\lambda\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}}{\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}-\\\\sqrt{(1-\\\\bar{\\\\alpha}_s)\\\\bar{\\\\alpha}_t}}\\\\epsilon_c-\\\\frac{(\\\\lambda-1)\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}+\\\\sqrt{(1-\\\\bar{\\\\alpha}_s)\\\\bar{\\\\alpha}_t}}{\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}-\\\\sqrt{(1-\\\\bar{\\\\alpha}_s)\\\\bar{\\\\alpha}_t}}\\\\epsilon)$$\\n\\nSo, $$\\\\omega=\\\\frac{\\\\lambda\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}}{\\\\sqrt{(1-\\\\bar{\\\\alpha}_t)\\\\bar{\\\\alpha}_s}-\\\\sqrt{(1-\\\\bar{\\\\alpha}_s)\\\\bar{\\\\alpha}_t}}.$$\\n\\nAs discussed in Sec. 5, previous studies also propose adjusting the guidance scale across timesteps. 
However, it may be incorrect to claim that \\\"these findings are orthogonal to ours and keep the sampling trajectory the same ... CFG++ is designing a different trajectory\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I appreciate the authors' replies and have also checked the other reviews. All my concerns are well addressed. Generally speaking, I like this work and strongly lean to accept it as the ICLR main conference paper. Meanwhile, I strongly suspect that reviews from reviewer NrB8 are generated by an LLM irresponsibly, so those reviews should be ignored.\"}",
"{\"title\": \"Reply to reviewer\", \"comment\": \"We appreciate the role of reviewers in maintaining the integrity and quality of submissions, which requires thoughtful evaluations. However, we must express our concern regarding the tone and approach of the reviewer's recent comments.\\n\\nWhile we respect the reviewer's right to form opinions about this work, we find it deeply troubling that you **accused another reviewer of potential bias** and questioned their integrity **without evidence**. Such allegations undermine the collegial and constructive environment essential for academic discourse. Additionally, the significant shift in the reviewer's score (**5 -> 1**), accompanied by an increase in confidence (**1 -> 5**), appears inconsistent with the review process's expectations for objective and evidence-based evaluation. It is hard to understand the sudden change in score where there were **no comments** regarding our faithful response to the reviewer's comments.\\n\\nIf there are specific and substantiated concerns about our work, we are more than willing to address them constructively. However, ad hominem remarks and unsupported claims detract from the professionalism expected in peer review.\\n\\nWe kindly request that we focus on the technical and scientific merits of the submission to ensure a fair and transparent evaluation process. Maintaining professionalism and mutual respect is critical for the credibility and success of this conference and the community it serves.\\n\\nBest regards,\\nauthors\"}",
"{\"title\": \"Reply to the authors\", \"comment\": \"I appreciate the effort put into clarifying the points raised and improving the submission. While I recognize the potential and contributions of this work, I believe it does not yet fully meet the bar for the next score tier (8) in terms of impact or completeness. Therefore, I will retain my current score of 6, which reflects my positive view of the work and my opinion that it could be considered for acceptance.\"}",
"{\"comment\": \"In contrast to the reviewer\\u2019s comments, most of the questions can **already be answered by reading the paper**. We gently remind the reviewer to go through the paper carefully. We provide answers to the set of questions that we feel are valid:\\n\\n**Q1.** Off-manifold behavior of CFG\\n\\n**A.** Please see Fig. 3, along with its analysis in Section 3.2.\\n\\n**Q2.** Choice of dataset\\n\\n**A.** We chose the COCO benchmark as it is the standard benchmark for the quantitative evaluation of T2I. Please let us know if the reviewer thinks that this is insufficient to validate our method along with the reason, and we would be happy to accommodate.\\n\\n**Q3.** Mathematical justification\\n\\n**A.** Section 3.2 is fully devoted to the mathematical justification of CFG++. Further analysis is given in Appendix C. It would be great if the reviewer could elaborate on what more we should do.\\n\\n**Q4.** Why were certain SOTA methods such as Stable Diffusion and other recent advances in guided diffusion excluded from the comparison?\\n\\n**A.** We would be happy to further compare CFG++ with other methods if feasible. Stable Diffusion is a model that we already used in our experiments, and it is not something comparable to CFG++. Most \\u201cguided diffusion\\u201d methods are orthogonal to the advances made in this manuscript.\\n\\n**Q5.** How does CFG++ perform in terms of computational efficiency and scalability with larger models?\\n\\n**A.** We already tested our method on various models, including SD1.5, SDXL, SDXL-Lightning, and Turbo. CFG++ worked consistently well across all model classes. We note that there is no computational overhead for CFG++ when compared against CFG.\\n\\n**Q6.** User-centric evaluations\\n\\n**A.** We conducted a user study with 18 participants to evaluate the quality of images generated using the CFG and CFG++ methods. 
The study involved an A/B test, where participants were shown pairs of images and asked to compare them based on overall quality and text alignment, selecting the image they found superior. The test included 12 images: 4 generated with SD1.5 and 8 with SDXL-lightning. All the images were chosen from those featured in the paper to maintain relevance to the study. The results showed a clear preference for the CFG++ method, with 81.4% of responses favoring CFG++ images, significantly outperforming CFG, which was preferred in only 18.6% of cases.\\n\\n**Q7.** Possible future studies\\n\\n**A.** Viewing text guidance as an optimization problem similar to the literature of diffusion model-based inverse problem solvers, we can easily extend our formulation to various types of guidance, including negative guidance [1] , composition of different guidance [2], etc. Indeed, we are already seeing interesting applications of CFG++ to different domains, including guided sampling in video diffusion models [3], fairness [4], and more. We are excited to see future works in this direction.\\n\\n\\n**References**\\n\\n[1] Koulischer, Felix, et al. \\\"Dynamic Negative Guidance of Diffusion Models.\\\" 2024.\\n\\n[2] Liu, Nan, et al. \\\"Compositional visual generation with composable diffusion models.\\\" ECCV 2022.\\n\\n[3] Lee, Dohun, et al. \\\"VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide.\\\" , 2024.\\n\\n[4] Um, Soobin, and Jong Chul Ye. \\\"MinorityPrompt: Text to Minority Image Generation via Prompt Optimization.\\\", 2024.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their constructive, positive, and thorough reviews. We are happy that the reviewers think that our paper is **well-written** and **easy to understand** (```hzx1```, ```7gyf```, ```GmUK```, ```NrB8```), **well-grounded with theory** (```GmUK```, ```NrB8```), and conducts **thorough experiments** (```hzx1```, ```GmUK```, ```NrB8```). For a point-to-point response on the weaknesses and the questions, please see our responses below.\"}",
"{\"comment\": \"**W1.** The CFG method formulation seems different in [1]. Moreover, the setting in the experiments should be equivalent to no guidance at all when using CFG according to Eq. (6). In Fig. 6, images appear edited.\\n\\n**A.** There are two different ways of expressing CFG: one that recovers the conditional when the guidance scale is 0 (used in [1]), \\n\\n$\\\\epsilon_{c}(x_t) + \\\\omega'(\\\\epsilon_{c}(x_t) - \\\\epsilon_{\\\\varnothing}(x_t))$\\n\\nand one that recovers the unconditional when the guidance scale is set to 0. \\n\\n$\\\\epsilon_{\\\\varnothing}(x_t) + \\\\omega(\\\\epsilon_{c}(x_t) - \\\\epsilon_{\\\\varnothing}(x_t))$\\n\\nWe advocate for the latter, as this view provides better flexibility, such as composition, negation, etc. Note that for the former parametrization, we can always recover the latter by setting $\\\\omega' = -1 + \\\\omega$. In Fig. 6, the results are different from the source image even for CFG because there exist inversion errors, not because the image is changed through guidance.\\nThe reviewer is kindly reminded that the difficulty of inversion in CFG is well-known in the literature.\\n\\n**W2.** ImageReward metric is not defined. This should also be reported for Tab. 1\\n\\n**A.** Thank you for pointing this out. For clarity, we have now included a brief definition and citation of the ImageReward metric in Section 4.1. We agree that the gains in the quantitative metric are not dramatic. However, this is understandable as our method is a simple adjustment to the sampling scheme with no computation overhead. Considering this, consistent FID gains throughout all guidance scales are an important advantage. Moreover, the advantage of CFG++ does not only stem from the resulting quality of the samples, but also from the generation trajectory, reduced inversion errors, and compatibility with other downstream tasks such as inverse problem-solving. 
Finally, CFG++ achieves remarkable improvements in distillation models (ImageReward of SDXL-Turbo: 0.777 \\u2192 0.968, SDXL-Lightning: 0.691 \\u2192 0.829). This highlights the unexplored side effects of CFG in low-NFE settings.\\n\\n**W3.** The statement that the proposed CFG++ is invertible and brings more diversity in the generation is not evaluated.\\n\\n**A.** We show theoretically why the DDIM inversion error is smaller for CFG++ in Eqs. (21), (22). We empirically validate our argument in Fig. 6, where we show much better reconstruction results. In terms of diversity, it is hard to show that CFG++ enhanced the diversity of the samples. However, as can be seen in the PF-ODE trajectory of the generation shown in Fig. 1 (bottom), we can deduce that CFG++ will have a higher chance of being diverse, as the generation is done in a coarse-to-fine manner, as opposed to CFG, where the details are already pre-configured in the earlier stages.\\n\\n**Typo1,2.** Fixed.\\n\\n**Typo3.** This should be \\u201crenoising\\u201d. As can be seen from Alg. 1 and 2, the crucial difference lies in line 4, where the noise is added back to achieve renoising.\\n\\n**Q1.** Average performance across the different corruptions in Tab. 3 would help. CFG++ seems more efficient than CFG for FID but not so much according to LPIPS. Any justifications?\\n\\n**A.** We modified our table to include the average score, demonstrating that our method outperforms others in most aspects. Much of the improvement from using CFG++ instead of CFG was seen in the perceptual quality of the reconstructions, rather than the distortion [2]. FID metrics capture this well, and we think this is the reason why the FID shows the most pronounced improvement among all the different metrics. However, even for LPIPS, we do see that our method outperforms the baselines.\\n\\n\\n**References**\\n\\n[1] Ho, Jonathan, and Tim Salimans. 
\\\"Classifier-free diffusion guidance.\\\", 2022.\\n\\n[2] Blau, Yochai, and Tomer Michaeli. \\\"The perception-distortion tradeoff.\\\" CVPR 2018.\"}",
"{\"comment\": \"**W1.** Few minor issues in tables, ImageReward missing in Tab. 1\\n\\n**A.** Thank you for the comment. We fixed the manuscript accordingly.\\n\\n**W2.** The paper focuses on addressing the inability to perform DDIM inversion with CFG, so it should compare against NTI [1]\\n\\n**A.** There are two major differences between CFG++ and NTI that inhibit an apples-to-apples comparison. First, NTI requires a compute-heavy null text optimization process to correct for deviations from the manifold, whereas CFG++ incurs **no additional computational overhead**. Second, NTI **does not use CFG** during inversion, relying instead on the conditional noise estimate with the guidance scale set to 1. For these reasons, and due to time constraints, we do not include the comparisons.\\n\\n**W3.** Is \\\"Contrary to the widespread belief that these are inherent limitations of diffusion models, this paper reveals that these problems actually stem from the off-manifold phenomena associated with CFGs...\\\" correct?\\n\\n**A.** Thanks for the comment. We revised \\u201cContrary to the widespread belief that these are inherent limitations of diffusion models, this paper reveals that these problems actually stem from the off-manifold phenomenon\\u201d to \\u201cThis paper reveals that the problems may stem from the off-manifold phenomenon\\u201d.\\n\\n**Q1.** SDXL-Turbo does not make use of guidance scale or negative prompt. Instead, they disable it. How is CFG++ valid in this case?\\n\\n**A.** SDXL-Lightning and Turbo distill the classifier-free guided teacher diffusion models, meaning they also inherently depend on CFG but with a fixed guidance scale. Thus, after text-conditional denoising, the renoising process can be rectified similarly to SDXL by leveraging the unconditional noise for renoising with null conditioning.\\n\\n**Q2.** Does this method also work for negative prompts?\\n\\n**A.** Yes, the same argument applies. 
However, due to the different consequences of using negative prompts [2], we focused our experiment on positive prompts. It would be an interesting direction of research to study the incorporation of dynamic negative guidance [2] and CFG++ in the future, which is out of the scope of this work.\\n\\n\\n**References**\\n\\n[1] Mokady, Ron, et al. \\\"Null-text inversion for editing real images using guided diffusion models.\\\" CVPR 2023.\\n\\n[2] Koulischer, Felix, et al. \\\"Dynamic Negative Guidance of Diffusion Models.\\\", 2024.\"}",
"{\"title\": \"Please check the authors' responses\", \"comment\": \"Dear reviewers,\\n\\nCould you please check the authors' responses, and post your message for discussion or changed scores?\\n\\nbest,\\n\\nAC\"}",
"{\"comment\": \"We would like to thank the reviewer for the constructive feedback. We are glad that most of the concerns were addressed.\\n\\n1. We agree that we could tone the claim down. We modified L107 to \\\"Furthermore, CFG++ reduces the inversion error, enhancing and simplifying image reconstruction, as well as editing\\\"\\n\\n2. This outcome differs from the consistent improvement of CFG++ over CFG observed with 50 NFE DDIM sampling. We attribute this to two factors: (1) the overall image quality for 20 NFE DPM++ 2M is generally lower compared to 50 NFE DDIM, leading to noisier quantitative metrics, particularly for metrics like ImageReward that are sensitive to subtle quality changes; and (2) the difference between CFG and CFG++ tends to be less pronounced in low NFE regimes without distillation, as stronger guidance effects emerge with higher NFEs.\\nMoreover, the primary goal of this experiment was to demonstrate the compatibility of CFG++ with higher-order solvers, rather than to establish its superiority. The results indicate that CFG++ effectively integrates with such solvers, and its desirable properties become more apparent as the number of NFEs increases.\\n\\nWe made this clear in the revised **Section 4.1**. Please let us know if you have any further questions.\"}",
"{\"title\": \"Please focus on the evaluation of the submission\", \"comment\": \"Dear reviewers and authors,\\n\\nLet us refocus on the evaluation of the submission itself. The ACs will make a fair decision based on the provided comments and any remaining concerns following the rebuttal and discussion phases.\", \"reviewer_nrb8\": \"Could you please read the authors' responses and the submitted paper to determine if your concerns have been adequately addressed? Additionally, please participate in the discussion and indicate if you have any remaining concerns. In principle, each reviewer should provide sufficient explanations and evidence to justify their rating and confidence level.\\n\\nReviewers 7gyf and hzx1, could you please read the authors' responses and post your messages on your opinions?\\n\\nBest,\\n\\nAC\"}"
]
} |
E6rpTruK4v | CodeUnlearn: Amortized Zero-Shot Machine Unlearning in Language Models Using Discrete Concept | [
"YuXuan Wu",
"Bonaventure F. P. Dossou",
"Dianbo Liu"
] | Language Models (LMs) offer extensive knowledge across various domains, but they may inadvertently memorize sensitive, unauthorized, or malicious data, such as personal information in the medical and financial sectors. Machine unlearning methods aim to remove specific information from models after training to address this. However, current approaches require additional model training or struggle to effectively erase particular data points and their associated context due to LMs' complex, dense, and continuous nature. In this study, we propose a novel amortized unlearning approach using codebook features and Sparse Autoencoders (SAEs). By leveraging a bottleneck to decompose the activation space and regulate information flow, our method efficiently unlearns targeted information while preserving the model's performance on unrelated data. To the best of our knowledge, this is the first work that successfully enables unlearning specific topics with contextual relevance in an LM, marking a significant step towards real-world applications of machine unlearning. | [
"machine unlearning",
"discrete representation",
"AI safety",
"LLM"
] | Reject | https://openreview.net/pdf?id=E6rpTruK4v | https://openreview.net/forum?id=E6rpTruK4v | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"x8g9Kgn7sx",
"t7mrFtsHo6",
"t6sgiC2e9L",
"rDKBQ3cShV",
"qwLk87d7f1",
"pjJmbMpQMo",
"mb6VVkaI5V",
"mUfP2cKKBn",
"mMQGseQrUN",
"cH7c9lmakn",
"Zwv0PQcZEw",
"ZfCPSnmrKS",
"Z7nUZRe1vw",
"QiAhVWwhCB",
"QOVnyffUOm",
"NW2o7rhPwa",
"IZNQhuGU3y",
"3GgSGsQsUd",
"1AANb6WSAT"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1731001172923,
1732793054097,
1731094240550,
1733214339749,
1733155476146,
1733128761582,
1737523748262,
1732785926188,
1731663779928,
1734388189503,
1732793011350,
1729282793337,
1731053883050,
1733128735936,
1732879859942,
1733128929949,
1733128390483,
1729751954472,
1731662736150
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission6173/Reviewer_UJcb"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Reviewer_cUmv"
],
[
"ICLR.cc/2025/Conference/Submission6173/Reviewer_zjA8"
],
[
"ICLR.cc/2025/Conference/Submission6173/Reviewer_UJcb"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Area_Chair_EgHv"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Reviewer_gbpz"
],
[
"ICLR.cc/2025/Conference/Submission6173/Reviewer_27V7"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
],
[
"ICLR.cc/2025/Conference/Submission6173/Reviewer_zjA8"
],
[
"ICLR.cc/2025/Conference/Submission6173/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a zero-shot unlearning method for language models using the concept of a codebook. The idea appears novel and is expected to be effective in unlearning. However, there are some questionable aspects in the model design. Additionally, the evaluation lacks comparisons with existing methods, raising concerns about the practicality of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of integrating the concept of a codebook into machine unlearning seems novel and sound.\", \"weaknesses\": \"1. The proposed method requires a special architecture and is not applicable to existing large language models (LLMs).\\n2. The methodology is unclearly structured and described (see Questions 1\\u20137).\\n3. There is a lack of comparison with existing unlearning methods (see Questions 8-9).\\n4. There is insufficient analysis proving the benefit of the codebook concept (see Questions 10-11).\", \"questions\": \"**Method**\\n\\n1. **Relationship Between Sections 3.1 and 3.3**: What is the relationship between Section 3.1 (Equations 1\\u20133) and Section 3.3 (Equations 4\\u20137)? It appears that the only difference is the inclusion of two additional linear layers for encoding and decoding. It is unclear how the process in Section 3.1 is utilized in the overall pipeline beyond Section 3.3. What is the purpose of Section 3.1?\\n\\n2. **Differentiability of Code Selection**: In the code selection process, the use of *argtopk* would cut off the gradient. How did you make this process differentiable to enable model training?\\n\\n3. **Sensitivity to $S$ and $S'$**: The performance seems sensitive to the choice of $S$ and $S'$, while $S$ is set to 8 according to Appendix A. Is this number sufficient to represent the complex context of a long input consisting of at least 512 tokens? 
Additionally, as shown in the evaluation results, the trade-off between performance and unlearning success is highly variable. How can a user choose an appropriate $S$ and $S'$ in practice?\\n\\n4. **Security Through ReLU**: In the \\\"Security through ReLU\\\" section (Section 3.3), why do you believe there would be information leakage in the encoding/decoding process that consists of a single linear layer? Can you provide a scenario where data integrity is compromised during the unlearning process without ReLU? How does ReLU mitigate this issue?\\n\\n5. **L1 Penalty and Sparsity**: Why do you think the L1 penalty term promotes sparsity? Given that the code selection process uses cosine similarity, there might be a possibility that the scale of each code vector decreases, but this does not necessarily lead to sparsity.\\n\\n6. **Requirement of $D_T$ and $D_\\\\tilde{T}$**: Does a user always need to prepare both $D_T$ and $D_\\\\tilde{T}$ for unlearning?\\n\\n7. **Motivation for Using Equation 14**: What is the motivation for using Equation 14 as a description of enrichment? Are there other metrics that could avoid low-frequency scenarios without requiring an additional chi-squared test?\\n\\n\\\\\\n**Evaluation**\\n\\n8. **Unlearning Performance Metrics**: Is \\\"Normalized Improvement Drop\\\" a commonly used metric for measuring unlearning performance? Are there standard metrics or benchmarks for assessing unlearning performance used in the papers of related works section?\\n\\n9. **Comparison with Existing Methods**: Please provide a comparison with other existing unlearning methods. The methods mentioned in the related works section would be ideal candidates for this comparison.\\n\\n10. **Quality of the Learned Codebook**: Have you verified the quality of the learned codebook?\\n\\n11. **Relationship Between $D_T$ Quality/Size and Performance**: Have you investigated how the quality and size of $D_T$ affect unlearning performance? 
There may be additional interesting analyses to explore in this area.\\n\\n\\\\\\n**Minor Questions & Suggestions:**\\n\\n12. **Placement of Code Selection Process**: Locating the code selection process after the residual connection is an important design consideration to prevent information leakage, but this is not mentioned in the main text (only in the caption of Figure 1). Could you elaborate on this in the paper?\\n\\n13. **Complexity of Encoding/Decoding Layers**: Do you think a single linear layer with ReLU is sufficient for sparse encoding and decoding? Have you experimented with increasing the number of layers?\\n\\n14. **Placement of Section 3.2**: It might be more appropriate to include Section 3.2 in the Related Work section. In the Method section, focusing on why the paper selects a single codebook and uses $S>1$ might be sufficient.\\n\\n15. **Understandability of the Example**: The provided example is difficult to understand without knowledge of French. Consider using an example that is accessible to a broader audience.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer UJcb (2)\", \"comment\": \"---\\n\\n### Minor Questions & Suggestions\\n12. **Placement of Code Selection Process** \\n Thank you for pointing this out. We have elaborated on the placement of the codebook transformation after the residual connection in the main text. \\n14. **Placement of Section 3.2** \\n We agree with your suggestion and have moved Section 3.2 to the Related Work section for better flow and to maintain focus in the Method section.\\n15. **Understandability of the Example** \\n We have added detailed explanations for the examples presented, including translations and clarifications for non-French-speaking readers.\\n---\\nRegarding points 9, 10, and 13, we will reply later. Thank you for your patience and understanding.\\nWe hope these responses address part of your concerns. Thank you again for your thoughtful feedback, which has greatly improved the quality and clarity of our work.\"}",
"{\"summary\": \"This paper proposes a method for training a language model that is able to \\\"unlearn\\\" specified topics. The method involves using a sparse auto-encoder, aka codebook, to disentangle the representation learned in attention layers. The unlearning is achieved by removing the learned codes that link mostly to the targeted topics.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The studied problem is interesting and meaningful\\n\\n2. The paper focuses on generative machine translation tasks, not just discriminative classification\", \"weaknesses\": \"This paper has significant issues with the technical soundness and presentation / writing clarity. Details are as follows:\", \"regarding_the_method\": \"1. The paper did not mention what kind of LLM is compatible. Only in line 370, it mentions \\\"a large language model\\\", without concretely specifying it. Suppose the method is for normal multi-transformer-layer LLMs, then which transformer layer(s) is the single bottleneck inserted into?\\n\\n2. How to prevent the learned codes from collapsing? There is no supervision signal to guide the learning of disentangled codes. Since interpretability is emphasized in the paper, how to ensure that the learned codes are for topics but not for other task-related semantics?\\n\\n3. For retrieving the codes for unlearning, it seems there is a need to create a controlled dataset. How is this dataset generated? If we can directly generate such a dataset, why do we need the proposed method for unlearning? Is the dataset only for training or also for inference?\\n\\n4. Since the proposed method requires a. joint training, and b. an extra controlled dataset for retrieval, how can the method be termed \\\"Zero-shot\\\" as reflected in the title? There should be further detailed explanation of it in the paper.\\n\\nRegarding experiments\\n\\n5. The experiment section needs significant improvement. 
There are no concrete experiment settings. What LLMs, tasks, and datasets is the method tested on? What are the statistics of the dataset? What does each experiment tell us? Currently all the analyses are mixed together without subsections.\\n\\n6. Key experiments are missing. The paper needs systematic experiments on ablation, parameter sensitivity study, case studies, and most importantly, analyses on the learned codebook. \\n\\nRegarding clarity\\n\\n7. The writing of the paper is not clear; many key details are not clarified, such as those mentioned in point 1.\\n\\n8. The one single case study cannot be understood by readers who do not know the target language. Explanation is needed in the caption.\\n\\n9. The results should be put closer to the corresponding analyses.\", \"questions\": \"Please see above. Significant revision is recommended for this paper before re-submission.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment from Reviewer zjA8\", \"comment\": \"Dear authors,\\n\\nThanks for your detailed responses. \\n\\nHowever, my concerns remain. Namely, what if we cannot collect all the information required to \\\"un-learn\\\"? Why is Sparse Autoencoder employed here, and why does it work? Thus, I will maintain my score. I hope the authors can resolve these issues in their revision. \\n\\nBest regards,\\nReviewer zjA8\"}",
"{\"title\": \"Response to Authors\", \"comment\": \"Thank you for your efforts in addressing my concerns. While I agree with most of your responses, some issues remain unaddressed.\\n\\n\\\\\\n**1. Sensitivity to $S$ and $S'$**: This point was not addressed, but I believe this sensitivity is a crucial aspect, particularly in practical applications.\\n\\n**2. Requirement of $D_T$ and $D_\\\\tilde{T}$**: Requiring a dataset for every unlearning process imposes a significant burden on users. Although the paper emphasizes that the proposed method is \\\"zero-shot,\\\" it appears closer to \\\"weakly supervised.\\\"\\n\\n**3. Comparison with Existing Methods**: This remains the most critical concern that needs to be addressed.\\n\\n**4. Quality of the Learned Codebook**: There is still no verification that the codebooks contain meaningful context.\\n\\n\\\\\\nThe authors mentioned that they would address concerns 3 and 4, but I have not seen a response to these points yet. I believe addressing the above concerns is necessary for this paper to establish technical novelty and demonstrate performance improvements. Therefore, I will maintain my score.\"}",
"{\"title\": \"Response to Reviewer gbpz (3)\", \"comment\": \"### 4. **Risk of Unintentional Information Loss**\\nWe agree that unintended removal of valuable information is a critical challenge in unlearning. Our method inherently introduces a trade-off between unlearning the target topic and preserving unrelated information. For instance:\\n- The normalized improvement drop metric indicates that non-topic content does experience minor degradation, although it is significantly less than the target topic.\\nMitigating these risks is an open research problem. Future iterations of our method could incorporate context-aware mechanisms to refine the selection of codes for deletion, ensuring that critical, non-topic-related semantics are retained.\\n---\\nThank you again for your insightful questions and suggestions. We believe these additions and clarifications significantly enhance the clarity and comprehensiveness of the manuscript.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Response to Reviewer cUmv (1)\", \"comment\": \"We are very grateful for your time and care in reading our paper, and thank you for your thoughtful and constructive feedback. Your comments have been of great help in revising the paper and experiments. Whether or not our paper is accepted, we are happy to receive your comments.\", \"we_have_revised_the_document_extensively_in_response_to_your_comments_and_address_the_following_concerns\": \"### Writing and Presentation:\\n1. **Clarification of Model, Task, and Dataset (Points 1, 5, 7, 9):**\\n - We have revised the experiment and result sections to clearly specify the model architecture, task, and dataset. The LLM used in our experiments is now explicitly described, including details about the transformer layers and their interaction with the codebook.\\n - At the same time, we adjusted the structure of the results section for better presentation.\\n\\n2. **Sample Analysis (Point 8):**\\n - For the sample, we added a caption to aid understanding.\\n### Methodological Improvements:\\n3. **Control Dataset (Point 3):**\\n - The control dataset is generated from the training set by replacing keywords in the target topic dataset with unrelated terms while maintaining the original context. This dataset is only used during the search phase and is entirely derived from the training set, avoiding reliance on any additional or external datasets.\\n - We clarified that the test dataset, including novel prompts, is not involved in either the training or unlearning phases. We simulate a practical usage scenario where a user may wish to forget specific information after completing the training with the available training set.\\n\\n4. **Zero-shot Unlearning (Point 4):**\\n Your comments are quite correct and we are very sorry for the confusion.\\n - We have clarified the concept of \\\"zero-shot\\\" unlearning in our context. 
The term refers specifically to the forgetting phase, where no additional data or retraining is required. The data for unlearning is drawn exclusively from the training set, and no gradient-based parameter updates are performed during unlearning.\\n - While the method still relies on initial training, we emphasize that after this step, the unlearning process operates in a zero-shot manner. This provides a foundation for exploring more advanced zero-shot methods in future work. We have also added a discussion in the appendix regarding potential approaches to omit the initial training step in future research.\\n\\n### Future Work:\\n - We acknowledge that the initial training phase is still a dependency and limit the scope of \\\"zero-shot\\\" to the forgetting phase. So we added a future work section to discuss the potential of this direction as well as our current limitations.\\n\\n### Additional Time for Experiments:\\nFor the remaining points 2 and 6, we will reply later. Thank you for your patience and understanding.\\n\\nWe are grateful for your valuable feedback, which has helped refine the paper further.\"}",
"{\"title\": \"Response to reviewer zjA8\", \"comment\": \"Thank you for your detailed feedback and for highlighting both the strengths and weaknesses of our work. We would like to clarify some points and seek further elaboration.\\n\\n### 1, Machine Unlearning vs. Machine Learning:\\nOur research focuses on machine **un-learning**, specifically removing or neutralizing the influence of targeted information in LLMs. The process ensures the model no longer retains or utilizes sensitive knowledge for relevant tasks. If there are specific aspects of our approach that seem unclear in this regard, we would appreciate further clarification.\\n\\n### 2, Use of Sparse Autoencoders:\\nThe Sparse Autoencoder (SAE) acts as a bottleneck to regulate the flow of information, helping isolate and disentangle sensitive features for unlearning. We agree that additional ablation experiments are warranted here; please give us some time to complete them.\\n\\n### 3, Leveraging Sensitive Knowledge:\\nYou raised an interesting question regarding the use of sensitive knowledge during unlearning. Our approach aims to ensure that models cannot exploit private or dangerous data, thereby preventing negative uses. We would appreciate clarification of any further questions you may have on this point.\"}",
"{\"metareview\": \"This paper tackles the crucial problem of machine unlearning, aiming to remove sensitive information from trained models. While the proposed method shows promise, the paper suffers from significant weaknesses in its presentation and analysis.\\n\\nAll reviewers and the AC acknowledge the importance of addressing machine unlearning. However, the paper's writing needs substantial improvement to ensure clarity and readability. Furthermore, the authors fail to adequately position their work within the existing literature, lacking a clear discussion of related work and the specific contributions of their approach.\\n\\nDespite attempting to address reviewer concerns during the rebuttal period, several issues remain unresolved. These include questions about the method's applicability to general large language models, concerns about claims of zero-shot learning that seem to require some data, and the lack of comparisons with alternative approaches.\\n\\nDue to these shortcomings, I recommend rejecting this paper. The authors need to significantly revise the manuscript to improve clarity, provide a thorough analysis of related work, and address the outstanding concerns regarding the method's applicability, data requirements, and comparative performance.\", \"additional_comments_on_reviewer_discussion\": \"I acknowledge the authors' efforts in addressing some of the reviewers' concerns and improving the clarity of the paper during the rebuttal phase. However, several critical issues remain unresolved and require attention before this paper can be considered for publication.\\n\\nSpecifically, the authors need to provide a more thorough comparison with alternative methods and address fundamental concerns regarding the proposed approach. These revisions are essential for strengthening the paper and ensuring its contribution to the field. I urge the authors to carefully consider these remaining concerns and revise the manuscript accordingly.\"}",
"{\"title\": \"Response to Reviewer UJcb (1)\", \"comment\": \"Thank you for your thorough and insightful review. Your comments and suggestions have significantly helped us improve the clarity and technical soundness of the paper. We are grateful for the opportunity to address your concerns and explain our revisions. Below are detailed responses to your comments:\\n\\n### Method\\n\\n1. **Relationship Between Sections 3.1 and 3.3**\\n Thank you for pointing this out. Upon further consideration, we agree that Section 3.3 introduced redundancy. We have now simplified the explanation in Section 3.3 and made its relationship with Section 3.1 more explicit. Section 3.1 describes the foundation of the codebook transformation, while Section 3.3 extends this by introducing the encoder-decoder structure to improve sparsity and enhance the representation's interpretability and effectiveness for unlearning.\\n\\n2. **Differentiability of Code Selection**\\n This is a very interesting question. The `topk` operation filters indices, which are then used to select vectors from the codebook for output and loss calculation. However, the indices themselves do not participate in the gradient computation. We realize that the implementation might give an impression of gradient flow through the indices, but in practice, the gradient is computed only for the selected vectors. Thank you for raising this question, as it allowed us to clarify this aspect.\\n\\n4. **Security Through ReLU**\\n Thank you for pointing out the confusion regarding this explanation. Upon reflection, we realize that our initial representation of \\\"security through ReLU\\\" was both unclear and misaligned with the core methodology. We have removed the section. We appreciate your feedback, which helped us refine the clarity and relevance of this section.\\n\\n5. **L1 Penalty and Sparsity** \\n Thank you for pointing this out. We refer to [Adly Templeton et al. 
(2024)](https://transformer-circuits.pub/2024/scaling-monosemanticity/) and [Samuel Vaiter et al. (2012)](https://arxiv.org/abs/1109.6222), which discuss how the L1 penalty promotes sparsity in autoencoders and signal reconstruction, respectively. Based on these works , we decided to use L1 as the penalty term.\\n\\n6. **Requirement of \\\\\\\\(D_T\\\\\\\\) and \\\\\\\\( D_{\\\\\\\\tilde{T}} \\\\\\\\)** \\n Yes, While \\\\\\\\(D_T\\\\\\\\) and \\\\\\\\( D_{\\\\\\\\tilde{T}} \\\\\\\\)are necessary for the current implementation, they are generated from the training data without requiring external data. This ensures a self-contained unlearning process. Future work may focus on reducing dependency on these datasets, such as labeling code information during training to enable direct unlearning.\\n\\n7. **Motivation for Using Equation 14** \\n You're right, it's indeed a problem. There are alternative metrics for measuring enrichment, such as the Kullback-Leibler (KL) divergence, log odds ratio, or weighted frequency ratios. However, we chose the enrichment metric in Equation 14 due to its simplicity and interpretability. Specifically, the log-transformed ratio effectively highlights codes with a substantial difference in activation frequencies between \\\\\\\\( D_T \\\\\\\\) and \\\\\\\\( D_{\\\\\\\\tilde{T}} \\\\\\\\), making it a straightforward tool for identifying enriched codes. \\nWhile the chi-squared test is an additional step, it serves to ensure statistical robustness by accounting for low-frequency scenarios where spurious activations might otherwise skew the results.\\nTherefore, we acknowledge that exploring more sophisticated and potentially computationally efficient methods for code enrichment analysis is an important future research direction. Metrics that naturally handle low-frequency scenarios without supplementary tests could streamline the process further and enhance scalability.\\n---\\n\\n### Evaluation\\n8. 
**Unlearning Performance Metrics** \\n To the best of our knowledge, standard metrics for LLM unlearning are still evolving. Related works often use BLEU or BERTScore (e.g., [Sijia Liu et al. (2024)](https://arxiv.org/abs/2402.08787)). However, since we evaluate both topic and non-topic performance, absolute metrics might misrepresent the results. Thus, we proposed the normalized improvement drop to provide a clearer and less biased measure of unlearning effectiveness. Using the zero-shot model as a baseline provides a more intuitive and straightforward representation.\\n\\n11. **Relationship Between Codebook Quality/Size and Performance** \\n We acknowledge this as an interesting and important area for exploration. Preliminary analyses suggest that codebook size and quality significantly impact unlearning performance, and we plan to include more detailed investigations in future work.\"}",
"{\"summary\": \"The paper presents a novel approach called \\\"CodeUnlearn\\\" for zero-shot machine unlearning in LLMs. The primary contribution is leveraging codebook features combined with sparse autoencoders (SAEs) to achieve efficient, targeted removal of specific information from models without the need for retraining. This method addresses the challenges of handling complex language tasks, preserving model performance while selectively unlearning sensitive or unwanted data.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a new method using discrete representations (codebook features), which is a step forward in the machine unlearning space, particularly for LLMs.\\n\\n2. The amortized zero-shot unlearning technique scales well with large models, unlike traditional retraining-based methods that are computationally expensive and inefficient.\\n\\n3. The paper presents experimental results with various metrics (e.g., BLEU, METEOR, BERTScore) to assess the unlearning procedure's effectiveness across different topics.\", \"weaknesses\": \"1. The writing quality is poor, with typos, errors, and incomplete sentences.\\n\\n2. The paper lacks thorough and empirical comparisons with other machine unlearning methods, including zero-shot unlearning techniques.\\n\\n3. The evaluation focuses heavily on metrics like BLEU and BERTScore, which may not capture all dimensions of model quality, such as fluency or overall task accuracy after unlearning.\\n\\n4. There are no ablation studies to evaluate the importance of different components in the unlearning pipeline, making it hard to assess which part of the method contributes most to its success.\\n\\n5. The paper lacks specific details on how the codebook and sparse autoencoder (SAE) are implemented, making it difficult to reproduce the experiments.\\n\\n6. 
The discussion lacks sufficient consideration of the risks of unintentionally removing valuable information during the unlearning process. The procedure may negatively impact semantically related concepts (e.g., unlearning \\\"love\\\" also affecting performance on \\\"like\\\").\", \"questions\": \"1. Can more baselines, including zero-shot unlearning methods, be added to highlight the method's comparative effectiveness?\\n\\n2. Could additional metrics, like human evaluations or task accuracy, be used to better capture fluency and performance post-unlearning?\\n\\n3. Can the authors provide ablation studies to clarify the impact of individual components?\\n\\n4. Could the authors help better understand the risks of unintentionally removing valuable information during the unlearning process?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a zero-shot machine unlearning method to remove sensitive or unwanted data from a model without retraining. By using discrete representations and sparse autoencoders, it structures the latent space to enable targeted information removal while preserving model performance on unrelated data. This paper claims to be the first effective method for unlearning contextually specific topics in LLMs, aiming to make unlearning more scalable and practical.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper introduces a zero-shot unlearning approach that leverages vector quantization and discrete representations, enabling targeted information removal without retraining and enhancing scalability and efficiency.\", \"weaknesses\": \"The paper makes claims about unlearning in large language models (LLMs) but only evaluates its approach on sparse autoencoders rather than actual LLMs, raising questions about its applicability to LLMs as stated. Additionally, it asserts novelty as \\\"the first work that successfully enables unlearning specific topics with contextual relevance,\\\" yet overlooks significant existing research in machine unlearning. This overstatement of novelty, along with the lack of relevant evaluations, weakens the paper's contributions and claims.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer gbpz (1)\", \"comment\": \"Thank you for raising these important points. We appreciate the opportunity to clarify and address your concerns. Below is our detailed response to your queries:\\n\\n### 1. **More Baselines**\\nWe acknowledge that including more baselines, especially zero-shot unlearning methods, would strengthen the paper. However, most existing unlearning methods are tailored for classification tasks, making them challenging to adapt for generative and context-heavy tasks like ours. This limitation partly influenced our decision to omit a detailed comparative analysis. We recognize the importance of this and will explore broader baseline implementations in future work to better contextualize our method's effectiveness. \\n### 2. **Additional Metrics**\\nTo the best of our knowledge, standard metrics for evaluating unlearning in LLMs are still evolving. Related works commonly use BLEU and BERTScore for measuring performance changes (e.g., [Sijia Liu et al., 2024](https://arxiv.org/abs/2402.08787)). In our case, we needed to evaluate the model\\u2019s performance on both topic and non-topic content. Absolute metrics can be biased or misleading in such scenarios, so we proposed the **Normalized Improvement Drop** metric to provide a clearer, less biased assessment of unlearning effectiveness. Using the zero-shot model as a baseline offers an intuitive representation of performance changes.\\nWe acknowledge that human evaluations or task accuracy measures could better capture fluency and performance post-unlearning. These additional metrics will be considered in future studies to enrich our analysis.\\n\\n---\"}",
"{\"title\": \"Response to Reviewer cUmv (2)\", \"comment\": \"### 5. Code Collapse (Point 2)\\nThank you for raising this critical point regarding code collapse and the disentanglement of learned codes. Code collapse is indeed a known issue with vector quantization (VQ) techniques, where a small subset of the codebook becomes over-utilized while others remain underutilized or inactive. We acknowledge that the problem still exists. We observed this phenomenon in our experiments\\u2014for instance, in a search conducted with a sample size of 500, only 1168 codes out of the total codebook were activated. Fortunately, our unlearning methodology remained effective, as demonstrated in the results.\\nWe believe that enhancing the disentanglement of learned codes could further improve the interpretability and effectiveness of the unlearning process.\"}",
"{\"title\": \"Response to Reviewer gbpz (2)\", \"comment\": \"### 3. **Ablation Studies**\\n| **Topic (N)** | **Activation** | **BLEU\\u2193 (Normalized Improvement Drop%)** | **METEOR\\u2193 (Normalized Improvement Drop%)** | **BERT-P\\u2193 (Normalized Improvement Drop%)** | **BART\\u2193 (Normalized Improvement Drop%)** |\\n|--------------------|----------------|-------------------------------------------|---------------------------------------------|---------------------------------------------|-------------------------------------------|\\n| **Love (207)** | ReLU | 0.16 **_(-112.52)_** | 0.39 **_(-117.76)_** | 0.80 **_(-118.88)_** | -4.80 **_(-143.96)_** |\\n| | Linear | 0.18 _(-89.24)_ | 0.41 _(-88.40)_ | 0.80 _(-88.71)_ | -4.67 _(-78.30)_ |\\n| **Julien (255)** | ReLU | 0.19 **_(-113.12)_** | 0.42 **_(-138.47)_** | 0.80 **_(-134.60)_** | -5.15 **_(-164.68)_** |\\n| | Linear | 0.21 _(-88.75)_ | 0.46 _(-94.74)_ | 0.81 _(-84.55)_ | -5.03 _(-128.70)_ |\\n| **Captain (137)** | ReLU | 0.20 _(-72.10)_ | 0.47 _(-140.71)_ | 0.83 _(-84.44)_ | -5.16 **_(-87.90)_** |\\n| | Linear | 0.21 **_(-95.43)_** | 0.45 **_(-157.65)_** | 0.83 **_(-100.37)_** | -5.15 _(-85.57)_ |\\n| **Poor (151)** | ReLU | 0.18 **_(-70.61)_** | 0.43 **_(-70.78)_** | 0.81 **_(-60.84)_** | -5.03 _(-79.81)_ |\\n| | Linear | 0.19 _(-61.57)_ | 0.43 _(-64.80)_ | 0.82 _(-36.18)_ | -5.08 **_(-100.39)_** |\\n| **Wish (217)** | ReLU | 0.15 **_(-144.83)_** | 0.33 **_(-249.51)_** | 0.78 **_(-182.02)_** | -4.95 _(-309.34)_ |\\n| | Linear | 0.17 _(-108.57)_ | 0.39 _(-173.86)_ | 0.80 _(-87.14)_ | -4.93 **_(-792.93)**_ |\\n| **White (179)** | ReLU | 0.12 _(-157.45)_ | 0.38 _(-218.04)_ | 0.80 **_(-403.04)_** | -4.85 **_(-119.99)_** |\\n| | Linear | 0.11 **_(-326.98)**_ | 0.36 **_(1781.90%)**_ | 0.79 _(145.11%)_ | -4.89 _(-41.09%)_ |\\n| **Black (190)** | ReLU | 0.16 **_(-85.16)_** | 0.40 _(-138.04)_ | 0.80 _(-115.56)_ | -4.70 **_(-62.91)_** |\\n| | Linear | 0.16 _(-70.03)_ | 0.39 **_(-166.23)_** | 0.80 
**_(-123.45)_** | -4.63 _(-49.53)_ |\\n\\nFrom the table, we observe that while both ReLU and linear activations lead to effective unlearning, models with ReLU activation generally exhibit more stable and consistent performance, as highlighted by their lower improvement drop percentages across metrics like BLEU, METEOR, BERT-P, and BART. Specifically:\\nReLU outperforms linear activation in terms of stability, particularly on topics like \\\"Julien\\\" and \\\"Wish,\\\" where the BLEU and METEOR improvement drops are less variable. In cases where performance metrics are critical, ReLU mitigates excessive degradation, ensuring smoother unlearning transitions.\\n\\n---\"}",
"{\"title\": \"Response to Reviewer cUmv (3)\", \"comment\": \"**Response to Ablation:**\\n\\nWe also conducted an ablation study to compare the performance of models using ReLU and linear activations under identical settings. The results are presented in the table below:\\n\\n| **Topic (N)** | **Activation** | **BLEU\\u2193 (Normalized Improvement Drop%)** | **METEOR\\u2193 (Normalized Improvement Drop%)** | **BERT-P\\u2193 (Normalized Improvement Drop%)** | **BART\\u2193 (Normalized Improvement Drop%)** |\\n|--------------------|----------------|-------------------------------------------|---------------------------------------------|---------------------------------------------|-------------------------------------------|\\n| **Love (207)** | ReLU | 0.16 **_(-112.52)_** | 0.39 **_(-117.76)_** | 0.80 **_(-118.88)_** | -4.80 **_(-143.96)_** |\\n| | Linear | 0.18 _(-89.24)_ | 0.41 _(-88.40)_ | 0.80 _(-88.71)_ | -4.67 _(-78.30)_ |\\n| **Julien (255)** | ReLU | 0.19 **_(-113.12)_** | 0.42 **_(-138.47)_** | 0.80 **_(-134.60)_** | -5.15 **_(-164.68)_** |\\n| | Linear | 0.21 _(-88.75)_ | 0.46 _(-94.74)_ | 0.81 _(-84.55)_ | -5.03 _(-128.70)_ |\\n| **Captain (137)** | ReLU | 0.20 _(-72.10)_ | 0.47 _(-140.71)_ | 0.83 _(-84.44)_ | -5.16 **_(-87.90)_** |\\n| | Linear | 0.21 **_(-95.43)_** | 0.45 **_(-157.65)_** | 0.83 **_(-100.37)_** | -5.15 _(-85.57)_ |\\n| **Poor (151)** | ReLU | 0.18 **_(-70.61)_** | 0.43 **_(-70.78)_** | 0.81 **_(-60.84)_** | -5.03 _(-79.81)_ |\\n| | Linear | 0.19 _(-61.57)_ | 0.43 _(-64.80)_ | 0.82 _(-36.18)_ | -5.08 **_(-100.39)_** |\\n| **Wish (217)** | ReLU | 0.15 **_(-144.83)_** | 0.33 **_(-249.51)_** | 0.78 **_(-182.02)_** | -4.95 _(-309.34)_ |\\n| | Linear | 0.17 _(-108.57)_ | 0.39 _(-173.86)_ | 0.80 _(-87.14)_ | -4.93 **_(-792.93)**_ |\\n| **White (179)** | ReLU | 0.12 _(-157.45)_ | 0.38 _(-218.04)_ | 0.80 **_(-403.04)_** | -4.85 **_(-119.99)_** |\\n| | Linear | 0.11 **_(-326.98)**_ | 0.36 **_(1781.90%)**_ | 0.79 _(145.11%)_ | -4.89 _(-41.09%)_ 
|\\n| **Black (190)** | ReLU | 0.16 **_(-85.16)_** | 0.40 _(-138.04)_ | 0.80 _(-115.56)_ | -4.70 **_(-62.91)_** |\\n| | Linear | 0.16 _(-70.03)_ | 0.39 **_(-166.23)_** | 0.80 **_(-123.45)_** | -4.63 _(-49.53)_ |\\n\\nFrom the table, we observe that while both ReLU and linear activations lead to effective unlearning, models with ReLU activation generally exhibit more stable and consistent performance, as highlighted by their lower improvement drop percentages across metrics like BLEU, METEOR, BERT-P, and BART. Specifically:\\nReLU outperforms linear activation in terms of stability, particularly on topics like \\\"Julien\\\" and \\\"Wish,\\\" where the BLEU and METEOR improvement drops are less variable. In cases where performance metrics are critical, ReLU mitigates excessive degradation, ensuring smoother unlearning transitions.\"}",
"{\"summary\": \"This paper aims to address a critical issue in the deployment of Large Language Models (LLMs): the inadvertent memorization of sensitive or unauthorized data, a highly relevant topic, especially given the increasing use of LLMs in domains where data privacy is paramount. To this end, the authors introduce a novel amortized unlearning approach using codebook features and Sparse Autoencoders (SAEs). Finally, some experiments are conducted to verify the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method is designed to unlearn targeted information efficiently without additional model training. This is an advantage over existing approaches that often necessitate retraining, which can be computationally expensive and time-consuming.\\n\\nThe proposed method is simple yet effective, and the experimental results are decent.\", \"weaknesses\": \"From a methodology point of view, the proposed approach is to remember what should be unlearned rather than to unlearn something. Namely, if we take the whole model as a system, no sensitive knowledge is removed, while the authors claim in the abstract section that machine unlearning methods aim to remove specific information.\\n\\nIt is unclear why Sparse Autoencoder is employed here and why it works.\", \"questions\": \"Leveraging sensitive knowledge to avoid utilizing sensitive information can be a bit confusing. What if the employed information is forbidden to use?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer 27V7\", \"comment\": \"Thank you for taking the time to review our work. While we appreciate your efforts, we believe there are some ambiguities in your review. As such, we find it challenging to address your concerns effectively. We kindly request clarification on your comments to provide a more comprehensive and accurate response.\"}"
]
} |
E6kQ51yfAj | Progressive LLM Alignments Using Two-Player Games | [
"Rui Zheng",
"Hongyi Guo",
"Zhihan Liu",
"Xiaoying Zhang",
"Yuanshun Yao",
"Xiaojun Xu",
"Zhaoran Wang",
"Zhiheng Xi",
"Tao Gui",
"Qi Zhang",
"Xuanjing Huang",
"Yang Liu",
"Hang Li"
] | Alignment of large language models (LLMs) is a process that ensures the model’s responses to user prompts align with human intentions and social values. This optimization typically relies on pre-collected prompts. The collection of these prompts often either requires careful human intervention or proves difficult to achieve good coverage of all scenarios in which an LLM can improve. To address this issue, we propose an alignment method based on a two-agent game, consisting of an adversarial agent and a defensive agent. The adversarial agent’s task is to generate prompts that expose the deficiencies of the defensive agent. At the same time, the defensive agent improves its performance on the prompts generated by the adversary based on feedback from the reward model. This iterative process is repeated to enhance the model’s performance. We theoretically demonstrate that, under mild assumptions, this iterative alignment process converges to a Nash equilibrium between the two agents. Learning in this competitive environment results in policies with better generalization capabilities. We demonstrate the advantage of our framework using extensive experiments. | [
"large language models",
"alignment",
"safety"
] | Reject | https://openreview.net/pdf?id=E6kQ51yfAj | https://openreview.net/forum?id=E6kQ51yfAj | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"znlesDXHdv",
"zWe29NZXpo",
"v0FhhABFuR",
"taVzc7PPH1",
"nhgihmYVAS",
"mQFnsW3rmZ",
"iqX69F1Piv",
"aLYuJZhJ3j",
"TyD143aubL",
"Rnix4O69lT",
"PPh7G2BGxc",
"OgsMI2l37a",
"HlqWDaZQ83",
"FfUEQECXNs",
"1xBlOQgiab"
],
"note_type": [
"official_comment",
"official_review",
"meta_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732471783024,
1730700442104,
1734718278938,
1730501135834,
1733186437684,
1730652091104,
1732593201552,
1732472664763,
1732466532093,
1732473095081,
1729780778053,
1737523966619,
1732465810289,
1732626394448,
1732473002996
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9183/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9183/Reviewer_KfEA"
],
[
"ICLR.cc/2025/Conference/Submission9183/Area_Chair_qTGD"
],
[
"ICLR.cc/2025/Conference/Submission9183/Reviewer_F894"
],
[
"ICLR.cc/2025/Conference/Submission9183/Reviewer_KfEA"
],
[
"ICLR.cc/2025/Conference/Submission9183/Reviewer_6iyV"
],
[
"ICLR.cc/2025/Conference/Submission9183/Reviewer_MEdA"
],
[
"ICLR.cc/2025/Conference/Submission9183/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9183/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9183/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9183/Reviewer_MEdA"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9183/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9183/Reviewer_6iyV"
],
[
"ICLR.cc/2025/Conference/Submission9183/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"We sincerely thank you for providing thoughtful and constructive feedback. Based on your feedback, we have revised the statements in the method section to make them more reader-friendly and easier to understand. **The changes are marked in blue.**\\n\\n\\n**Q1: What is the motivation behind using the Rdiv term in Equation 3.1? Specifically, could you a. explain how this diversity reward relates to or enhances existing alignment objectives. b. discuss the advantages of this approach over traditional alignment methods. c. clarify the general definition or derivation of Rdiv, as its current form seems restrictive in certain sections.**\\n\\n> **Motivation behind using the Rdiv term in Equation 3.1:**\\n\\n>**Relation to alignment objectives:** The diversity reward $R_{div}(x)$ influences only the optimization process of the adversarial agent, which aims to generate prompts where the defense model underperforms. As shown in Eq. (3.1) and Eq. (3.3), the defense model is optimized based on the prompts generated by the adversarial agent, $x \\\\sim \\\\mu(\\u22c5)$. By adding the diversity reward $R_{div}(x)$, the adversarial agent is encouraged to identify a broader range of weaknesses in the defense model, facilitating the defense model\\u2019s improvement across all identified vulnerabilities. Without the diversity reward, the adversarial agent might overfit to a narrow set of prompt types, limiting the extent of the defense model's improvement.\\n\\n>**Advantages over traditional alignment methods:** Traditional alignment methods typically focus on optimizing for a fixed set of prompts or responses, which may fail to address all edge cases. In reality, we cannot control the prompts users provide; therefore, our goal is to generalize as much as possible. 
By incorporating diversity into the prompts, we ensure broader coverage of potential adversarial cases, thereby enhancing the model's overall performance and robustness against unexpected inputs.\\n\\n>**Clarification of general definition of $R_{div}(x)$:**\\nThe diversity reward $R_{div}(x)$ is solely related to the prompt $x$ and measures the dissimilarity of generated prompts to previous generations, encouraging the adversarial agent to produce unique prompts each time. Therefore, any similarity measure for prompts can be applied. In Section 3.2.2, we explain how to compute $R_{div}(x)$ using two text similarity measures: SelfBLEU (Eq. 3.4) and sentence embeddings (Eq. 3.5).\\n\\nIn light of your feedback, we have revised the manuscript to include this clarification, ensuring that the definition and the motivation of $R_{div}(x)$ are clearly articulated.\\n\\n**Q2: I would like further explanation on including the KL divergence term in Equation 3.3, which is absent in Equation 3.1. Could you introduce the KL divergence term when it first appears in Equation 3.3 and discuss its implications for the overall optimization process?**\\n\\n>The KL divergence term in Equation 3.3 is included to regularize the adversarial agent\\u2019s prompt generation process, in line with the Follow-the-Regularized-Leader (FTRL) algorithm, which plays a key role in theoretically ensuring that the system converges to a Nash Equilibrium. The term $\\\\text{KL}(\\\\mu_{\\\\phi_t}(x) \\\\| \\\\mu_{\\\\phi_{t-1}}(x))$ penalizes the adversarial agent for making large changes to its prompt distribution across iterations, thereby maintaining stability in the training process. 
This ensures that the adversarial agent continues to explore new, challenging prompts while avoiding drastic shifts in its strategy.\\n\\n>The purpose of regularization is to strike a balance between exploration (generating diverse prompts) and stability (not overfitting to a narrow set of strategies), making the iterative optimization process more stable and effective.\\n\\n> We have revised the content in Section 3.1 to provide additional context.\"}",
"{\"summary\": [\"This paper proposes a 2 player adversarial zero-sum game (GPO) to develop a more robust and less toxic LLM. It consists of a defensive model that generates high quality and safe responses to the prompts generated by the adversarial agent. The adversarial agent generates prompts to try and make the defensive model generate bad or unsafe responses.\", \"As a side effect, the adversarial agent serves as a good red-teaming partner.\", \"A diversity parameter in the adversarial agent\\u2019s reward ensures that a diverse set of prompts is covered during the training process.\", \"GPO shows strong improvements in safety alignment.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors showcase the effectiveness of GPO and the diversity reward across different safety datasets and attacks.\", \"GPO does not seem to harm general quality despite only being used to align the model for safety scenarios.\", \"The paper is well written and the method is clearly detailed.\"], \"weaknesses\": [\"A pretrained Llama 2 7B model is used as a base, which then goes through SFT and RLHF. The data used for this isn't specified and it is unclear how the quality of the post-SFT model affects alignment. For example, [Vicuna 7B has a score of 6.00 on MT-Bench](https://lmsys.org/blog/2023-06-22-leaderboard/), which is comparable to the score post GPO.\", \"The paper largely focuses on safety alignment and it is not clear how much GPO would benefit general alignment.\", \"It is not clear how this method generalizes to larger models.\"], \"questions\": [\"The typical RLHF objective anchors to the initial reference policy. It is not clear why the GPO objective anchors to the policy from the previous step and how this affects training.\", \"Given that the anchor is updated at every step, this would result in a larger policy shift for both the defensive and adversarial agents. 
How does the RM perform when the prompts generated by the adversarial agent are OOD?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper presents an interesting approach (GPO) for safety alignment in language models, demonstrating promising results on a range of safety benchmarks. However, we recommend rejection due to several key limitations. While the method is clearly presented and the experiments demonstrate the effectiveness of GPO for safety, the evaluation relies on a potentially weak base model (post-SFT Llama 2 7B) whose training data is unspecified. This makes it difficult to isolate the true contribution of GPO versus improvements inherited from the pre-training. Furthermore, the limited scope of evaluation, primarily focusing on safety alignment, leaves the impact on general alignment and scalability to larger models unexplored. These weaknesses, particularly the lack of clarity regarding the base model and limited scope, hinder the overall impact and generalizability of the presented work, warranting rejection for this venue.\", \"additional_comments_on_reviewer_discussion\": \"Some concerns are addressed during rebuttal, but the paper will be benefitted with another iteration.\"}",
"{\"summary\": \"The authors in this paper address the limitations of traditional LLM alignment methods, which often rely on static prompt sets pre-collected by human labelers. Current methods need more adaptability to identify areas where LLMs require dynamic improvement.\\nThe authors propose a two-player game involving an adversarial (tutor) and a defensive (student) LLM to overcome these issues. The adversarial LLM automatically generates challenging prompts designed to expose the defensive LLM's weaknesses, pushing it to adapt and improve iteratively. The iterative adversarial alignment process is shown to converge to a Nash equilibrium between the adversarial and defensive agents. Moreover, they have also given an algorithm that finds the $O(1/\\\\sqrt{T})$-approximate Nash equilibrium in T iterations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The problem addressed in this paper is both exciting and novel, offering a fresh approach to LLM alignment. The analysis appears sound, and the proofs seem correct at first glance. However, I have some questions I would like to clarify, as highlighted below.\", \"weaknesses\": \"Some things need to be appropriately motivated; for example, $R_{div}(x)$ in eqn 3.1 is defined, but I need to figure out how to obtain this. Only in some sections is it defined, but that is also very restrictive, and how this will be defined or obtained in general needs to be clarified. Look at some more questions regarding this below.\\n\\nDifferent variants of the same algorithms are also hard to parse, and there needs to be a discussion about which algorithm is finally used in the theoretical analysis and why.\\n\\nThough the paper addresses a good problem, it still lacks some details, and I would like to see more clarity in the revised versions.\\n\\nMy primary concern lies with the motivation and problem formulation. 
The central motivation here relies on an attack prompt, assuming that an adversarial player controls the prompt distribution. This assumption seems overly strong. In practice, unless prompt optimization techniques or another language model are employed, we have limited control over the prompts users provide.\", \"questions\": \"1. What is the motivation behind using the $R_{\\\\text{div}}$\\u200b term in Equation 3.1? Specifically, could you\\na. explain how this diversity reward relates to or enhances existing alignment objectives.\\nb. discuss the advantages of this approach over traditional alignment methods.\\nc. clarify the general definition or derivation of $R_{\\\\text{div}}$\\u200b, as its current form seems restrictive in certain sections.\\n\\n2. I would like further explanation on including the KL divergence term in Equation 3.3, which is absent in Equation 3.1.\\n\\na. Could you introduce the KL divergence term when it first appears in Equation 3.3 and discuss its implications for the overall optimization process?\\n\\n3. The paper presents at least three algorithm variants, making it unclear which ones are used in the theoretical analysis and implementation and the rationale for this choice. Additionally, given these variants, what are the discrepancies observed between the theoretical and implemented versions?\\n\\n4. The existence of a Nash equilibrium (NE) is asserted based on the linearity of $J(\\u03c0,\\u03bc)$ in Equation 3.1. However, Algorithm 1 introduces KL terms in its updates, which contradicts this claim. Even without the KL term, could you explain why the $R_{\\\\text{div}}$\\u200b term in Equation 3.1 would be linear?\\n\\n5. Outside of the attack prompting scenario, it needs to be clarified why minimization over the prompt distribution is necessary. 
Please clarify specific use cases where the user has the flexibility to control the prompt distribution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Thank you to the authors\", \"comment\": \"Thank you for your response! Many of my questions have been answered.\\n\\n> Our primary objective in this work is to validate the feasibility of the two-agent game framework for alignment. Safety, in this context, is particularly well-suited to evaluation via the reward model (RM) because the RM can robustly judge whether a model's response adheres to safety norms. \\n\\nI agree that safety alignment lends itself well to this setup and the safety results you have provided are encouraging. However it doesn't demonstrate that GPO extends well to alignment in general. To further validate the feasibility of the two-agent game framework for alignment more broadly, it would be valuable to see how this method generalizes across model sizes, especially since GPO is more computationally intensive than typical alignment methods.\\n\\nFurther ablations on model sizes and a comparison with existing alignment methods would provide a more comprehensive understanding of the impact and potential of GPO.\"}",
"{\"summary\": \"This work proposes to use two-player zero-sum games to perform LLM alignment in safety-critical scenarios. This method iteratively trains a defensive agent and an adversarial agent in turn to adaptively generate progressively harder prompts. The authors provide a theoretical analysis of the method's convergence to Nash equilibrium and perform experiments to show the effectiveness of the method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The idea of using self-play in a two-player zero-sum game to improve LLM alignment is novel and intuitive.\\n2. The manuscript is well-organized and easy to follow. The main idea of using self-play and introducing diversity is well-explained.\\n3. The authors provide theoretical analysis as well as experiment results to show the effectiveness of their method.\", \"weaknesses\": \"1. More comprehensive evaluation.\\n 1. The theoretical result claims that the proposed method can converge to a Nash equilibrium, but there are no experiment results validating this claim. I would suggest the author use metrics like exploitability or NashConv to evaluate how far the current agents are from Nash equilibrium.\\n 2. The main results in Table 1, and 2 only show performance under a certain amount of training. A more comprehensive evaluation is to show the method's performance curve w.r.t training amount, e.g., the performance curve w.r.t. GPO iteration. This could better compare GPO with baselines like RLHF to show the effect brought by self-play training and show the progressive improvement process of GPO.\\n2. Need for ethics and social impact statement: this method trains a defensive agent as well as an adversarial agent. Although the authors discuss that the adversarial agent can be utilized for red teaming, it can also be potentially used to make attacks and induce harmful behaviors. 
However, the authors claim \\\"this work does not involve potential malicious or unintended uses ...\\\" in the ethics statement. I would suggest the authors add necessary discussions on how to prevent potentially harmful use of their method.\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"This method trains a defensive LLM agent as well as an adversarial LLM agent in safety-critical tasks. The adversarial agent can be potentially used to make attacks and induce harmful behaviors of LLMs and the authors do not address these potential ethics problems in their original submission.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for the response! Could you clarify what is specifically meant by \\\"shared resources\\\" in this context? Which modules or computational processes are being shared between the two agents, and how is this sharing implemented? Additionally, how is PPO utilized efficiently in this dual-policy framework? Could you provide a detailed description of the training workflow, including specific steps and optimizations employed to manage computational overhead?\"}",
"{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q3: The paper presents at least three algorithm variants, making it unclear which ones are used in the theoretical analysis and implementation and the rationale for this choice. Additionally, given these variants, what are the discrepancies observed between the theoretical and implemented versions?**\\n\\n> Algorithm 1 is our practical implementation used for experiments. Algorithms 2 and 3 are theoretical variants that differ from Algorithm 1 in two ways: their output policy generation and diversity treatment. While Algorithms 2 and 3 yield the mean policy (common for theoretical convergence analysis), Algorithm 1 yields the final policy, which is more practical and convenient.\\nSince it is challenging to theoretically analyze the importance of the diversity score with a general diversity reward $R_{\\\\rm div}(x)$ as defined in Algorithm 1, we introduce Algorithm 3, which uses entropy as the diversity reward. We demonstrate that incorporating diversity constraints leads to a more varied prompt distribution, while the absence of the entropy regularizer causes the adversarial agent to converge to a single-point prompt distribution.\\n\\n>We **have included** the above discussion in **Appendix A.4**.\", \"q4\": \"The existence of a Nash equilibrium (NE) is asserted based on the linearity of J(\\u03c0,\\u03bc) in Equation 3.1. However, Algorithm 1 introduces KL terms in its updates, which contradicts this claim. Even without the KL term, could you explain why the Rdiv term in Equation 3.1 would be linear?\\n\\n>Algorithm 1 outlines how we optimize the objective in Eq. 3.1. As shown in the Follow-the-Regularized-Leader (FTRL) algorithm, it is common to add regularizers when optimizing an objective to achieve smoother training or enforce desired properties. 
Importantly, even with the regularizers, the objective remains concave for the max player and convex for the min player, ensuring that a Nash Equilibrium (NE) still exists.\\n\\n>Linearity in $\\mu$: Here we consider whether the objective is linear in $\\mu$, which is a function on X. We can write $E_\\mu[R_\\text{div}(x)] = <\\mu, R_\\text{div}>$, which indicates this term is linear in $\\mu$.\\n\\n**Q5: Outside of the attack prompting scenario, it needs to be clarified why minimization over the prompt distribution is necessary. Please clarify specific use cases where the user has the flexibility to control the prompt distribution.**\\n\\n>Even beyond the attack prompting scenario, the underlying motivation and benefits of the two-player game alignment framework remain valid: we aim for the aligned model to generalize well. By minimizing the prompt distribution, the adversarial agent identifies areas where the current aligned model (the defense model) underperforms. The defense agent can then focus on improving itself in these identified weak areas.\\n\\n>**Use cases where users control prompt distribution**: In practical scenarios, certain actors (such as malicious users or adversarial agents) may have control over the inputs to the model, and the defensive agent needs to be robust to such varied distributions. This assumption allows us to model scenarios where an adversarial player might shape the prompts to exploit weaknesses in the defensive agent.\\n\\n>**Clarification**: In real-world applications, user-controlled prompt distributions may arise in contexts such as user-generated content (e.g., inputs in conversational AI or interactive systems) or adversarial testing scenarios where prompts are crafted to challenge the model's behavior. This mechanism allows the defensive model to better handle unexpected and potentially harmful inputs, ensuring more generalizable robustness.\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"We sincerely thank the reviewer for providing valuable feedback. We have included discussions on safeguards and considerations for responsible use to ensure that our method is applied ethically and avoids any unintended harmful consequences. **The changes are marked in blue.**\\n\\n**Q1: More comprehensive evaluation. The theoretical result claims that the proposed method can converge to a Nash equilibrium, but there are no experiment results validating this claim. I would suggest the author use metrics like exploitability or NashConv to evaluate how far the current agents are from Nash equilibrium.**\\n\\n>We can use the Nash gap to evaluate the convergence to the Nash equilibrium. The Nash gap is defined in equation (3.6). We can approximate the Nash gap in practice by measuring the gap between the harmful rate (or harmful score) of two iterations that optimize the defensive policy and the attacking policy. Through a figure similar to Figure 2-(c), we are able to observe the performance variations between different iterations. This enables us to ensure that the performance of our method reaches a point where it can no longer be further enhanced, signifying the attainment of convergence.\\n\\n**Q2: The main results in Table 1, and 2 only show performance under a certain amount of training. A more comprehensive evaluation is to show the method's performance curve w.r.t training amount, e.g., the performance curve w.r.t. GPO iteration. This could better compare GPO with baselines like RLHF to show the effect brought by self-play training and show the progressive improvement process of GPO.**\\n\\n>In Figure 2-(c) of the paper, we have presented the performance progression of our method (GPO) alongside baselines like RLHF. This figure demonstrates how the safety performance evolves as the training iterations increase. 
Notably, we observe that while RLHF's safety performance plateaus after a certain number of iterations, our method continues to improve beyond this point. Specifically, the model becomes even more robust to attacks, as evidenced by the lower attack success rate.\\n\\n>This demonstrates that the two-player game dynamics introduced by GPO drive continuous enhancements in safety alignment throughout training, ultimately resulting in a robust defense model and a strong attacker model, particularly on OOD datasets.\\n\\n**Q3: Need for ethics and social impact statement: this method trains a defensive agent as well as an adversarial agent. Although the authors discuss that the adversarial agent can be utilized for red teaming, it can also be potentially used to make attacks and induce harmful behaviors. However, the authors claim \\\"this work does not involve potential malicious or unintended uses ...\\\" in the ethics statement. I would suggest the authors add necessary discussions on how to prevent potentially harmful use of their method.**\\n\\n>We thank the reviewer for the valuable suggestion. We acknowledge that our initial ethics statement may not have fully addressed the potential risks associated with the adversarial agent. We have revised the statement to provide clearer guidance on the possible consequences of the method. \\n\\n>It is important to emphasize that the primary goal of our research is to demonstrate the effectiveness of alignment through a two-player gaming framework, specifically designed to produce a safe LLM (defense model) that is robust to various attacks. The adversarial agent serves as a critical component during training; however, since both the adversarial and defense agents evolve over iterations, the framework also results in a strong attack model (adversarial agent). But we agree that the adversarial agent could, in theory, be misused to generate harmful attacks.\"}",
"{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q5: How much GPU memory was required to run the llama-2-7B experiments for alternative updating?**\\n\\n>For our experiments, we utilized 8 NVIDIA A100 GPUs, each with 80GB of memory. This setup provided sufficient memory to handle the computational demands of the alternative updating process in our experiments, ensuring smooth training and optimization of both the adversarial and defensive agents.\\n\\n**Q6: How many total iterations were performed during the experiments? Was the performance consistently improving throughout the iterations?**\\n\\n>A total of 600 iterations were performed during our experiments. As the training progressed, the performance gradually improved and began to converge. Notably, the converged performance of our method outperformed the RLHF baseline, demonstrating that the iterative two-player game process of our approach leads to more powerful and aligned models compared to the standard RLHF method.\"}",
"{\"summary\": \"This paper introduces a novel framework that formulate the alignment problem as a two-player zero-sum game. This framework involves an adversarial agent and a defensive agent that iteratively interact to improve the LLM\\u2019s performance. The adversarial agent generates prompts to reveal weaknesses in the defensive agent\\u2019s responses, while the defensive agent seeks to adapt and strengthen its performance based on these prompts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The main strength of this paper are:\\n1. The overall writing is well-organized and easy to follow, making the ideas presented clear and understandable. \\n2. The experimental results appear solid, especially in safety-related tasks. The proposed framework shows improvements compared to traditional RLHF methods, particularly in handling harmful inputs and jailbreak scenarios, which suggests that the approach is effective in these contexts.\", \"weaknesses\": \"1. Lack of Novelty and Insight. While the overall idea is well-executed, it seems relatively straightforward and lacks significant novelty. The two-player game framework, while effective in this context, feels more like an incremental improvement rather than a significant innovation.\\n2. Triviality of the Additional Diversity Reward. The additional diversity reward also feels somewhat trivial, as it is a common technique in multi-agent settings. It appears more as a practical trick rather than a meaningful contribution or innovation to the overall methodology.\\n3. Technical Flaw: The paper\\u2019s analysis relies on mirror descent, which guarantees convergence only for the average strategy[1]. However, the final round strategy tends to cycle around the Nash equilibrium rather than converge to it [1][2]. As a result, using only the final strategy in place of the average strategy is not theoretically justified in this context.\\n4. 
Computational Cost: The approach requires maintaining two policies for alternating updates, with each policy being optimized using PPO. This results in substantial storage and computational costs, particularly in the context of RLHF. Furthermore, as highlighted in the third point, the use of mirror descent mandates tracking the average policy over time, making it insufficient to rely solely on the final policy. Storing all the historical policies or learning an average policy further exacerbates the computational burden, complicating practical implementation at scale.\\n\\n[1] Mertikopoulos, P., Lecouat, B., Zenati, H., Foo, C. S., Chandrasekhar, V., & Piliouras, G. (2018). Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile. arXiv preprint arXiv:1807.02629.\\n[2] Perolat, J., Munos, R., Lespiau, J. B., Omidshafiei, S., Rowland, M., Ortega, P., ... & Tuyls, K. (2021, July). From poincar\\u00e9 recurrence to convergence in imperfect information games: Finding equilibrium via regularization. In International Conference on Machine Learning (pp. 8525-8535). PMLR.\", \"questions\": \"1. How much GPU memory was required to run the llama-2-7B experiments for alternative updating?\\n2. How many total iterations were performed during the experiments? Was the performance consistently improving throughout the iterations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your valuable comments! Based on your feedback, we have included the details about SFT data in Appendix B3. **The changes are marked in blue.**\\n\\n**Q1: A pretrained Llama 2 7B model is used as a base, which then goes through SFT and RLHF. The data used for this isn't specified and it is unclear how the quality of the post-SFT model affects alignment. For example, Vicuna 7B has a score of 6.00 on MT-Bench, which is comparable to the score post GPO.**\\n\\n> The SFT dataset used in our model follows the approach of Vicuna. It consists of 53k user-shared conversations across various domains such as mathematics, knowledge querying, and coding, which are collected from ShareGPT.com. This dataset size is slightly smaller than the 70k dataset used in Vicuna. The reason for having only 53k samples is that the full 70k dataset is not accessible as it is not open source. Subsequently, some individuals have crawled and cleaned a 53k dataset. Despite being smaller than the Vicuna dataset, this dataset size still offers strong generalization capabilities. We have provided more details in Appendix B3.\\nWe want to further emphasize that **the primary focus of our approach is safety alignment, with MT-Bench serving as a tool to verify that safety enhancements do not substantially compromise utility**. Despite relying on a potentially less powerful SFT dataset, our two-agent game framework improves the model's safety while ensuring that utility either remains stable or exhibits a smaller improvement when stronger SFT models are utilized.\\n\\n**Q2: The paper largely focuses on safety alignment and it is not clear how much GPO would benefit general alignment.**\\n\\n>Our primary objective in this work is to validate the feasibility of the two-agent game framework for alignment. 
Safety, in this context, is particularly well-suited to evaluation via the reward model (RM) because the RM can robustly judge whether a model's response adheres to safety norms. The adversarial and defensive agents present a strong game-theoretic scenario where both progressively improve through iterations, ultimately producing a robust defense model and a strong attacker model. This effectiveness is particularly evident in OOD datasets, as shown by the experiments in Table 1 and Table 2. We thank the reviewer for highlighting this interesting direction, and we plan to explore its effectiveness in general alignment further in the future.\\n\\n**Q3: It is not clear how this method generalizes to larger models.**\\n\\n>Our GPO framework is designed to be model-agnostic, as its reliance on policy optimization and diversity rewards makes it scalable. Although empirical validation on larger models is limited due to computational constraints, our theoretical analysis and the framework's reliance on established reinforcement learning principles indicate that it should scale well. \\n\\n**Q4: The typical RLHF objective anchors to the initial reference policy. It is not clear why the GPO objective anchors to the policy from the previous step and how this affects this.**\\n\\n>Our framework is an iterative process consisting of two agents trained alternately, whereas typical RLHF involves only one agent trained once. In our approach, each iteration corresponds to a complete RLHF process. Consequently, the initial reference policy should be the policy at the start of each iteration\\u2014which is the policy from the previous step.\\n\\n**Q5: Given that the anchor is updated at every step, this would result in a larger policy shift for both the defensive and adversarial agents. 
How does the RM perform when the prompts generated by the adversarial agent are OOD?**\\n\\n>Our RM is based on Llama-Guard [1], a toxicity classifier that evaluates model outputs based on their toxicity levels. The task it performs is relatively straightforward, as it only requires scoring based on the model's output. Llama-Guard has been specifically designed to classify toxicity across a broad range of inputs, and its performance has been demonstrated to generalize well to OOD prompts. As shown in the Llama-Guard paper, the model exhibits strong OOD generalization, meaning it can effectively handle and score toxicity in previously unseen or out-of-distribution data. Thus, we expect the RM to continue performing well even when the prompts generated by the adversarial agent are OOD.\\n\\n>[1] Hakan Inan, Kartikeya Upasani, Jianfeng Chi et al. Llama guard: Llm-based input-output safeguard for human-ai conversations. arXiv preprint arXiv:2312.06674, 2023.\"}",
"{\"comment\": \"Thank you for your response. My concerns about the performance w.r.t. the training amount and ethics statement have been addressed, but the first concern is not fully solved. Please see the discussion below.\\n\\n**Results to validate the convergence to NE**\\n\\nI agree that the Nash gap in Eq (3.6) can be used to evaluate convergence to NE, but I do not find any figure or table to validate that the gap is (approximately) zero. The authors mentioned Fig. 2 (c) in their rebuttal, but the gap between two iterations is not close to zero in Fig. 2 (c), which does not serve as evidence to validate the convergence to NE. Therefore, the current manuscript still lacks results to support the claim of convergence to NE.\\n\\nMoreover, I think the gap between the harmful rates of two iterations may not be a good approximation of the Nash gap for two reasons. First, the attack and defense agents are only trained for fixed steps (200 and 400), which is not the best response in Eq (3.6). Second, the diversity term should be removed from the objective of the attack agent when training for the best response. As the authors mentioned Fig. 2 (c) as evidence for convergence, the value J only considers the attack success rate (or harmful rate) without the diversity. Therefore, the best response should be trained to optimize only for the attack success rate. In conclusion, I think a better way to get the approximate best response to the learned defense agent is to fix the defense agent and train an attack agent *using the base model to maximize the attack success rate without diversity term until convergence*. The approximate best response of the defense agent can be learned similarly.\"}",
"{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"Thank you to the reviewer for providing insightful comments.\\n\\n**Q1: Lack of Novelty and Insight. While the overall idea is well-executed, it seems relatively straightforward and lacks significant novelty. The two-player game framework, while effective in this context, feels more like an incremental improvement rather than a significant innovation.**\\n\\n>The core innovation of our approach lies in the dynamic, competitive interplay between an adversarial and a defensive agent, which allows for continual adaptation and refinement of both agents, leading to well-generalized aligned models, particularly on OOD datasets. Traditional alignment methods, such as RLHF, rely on static prompt datasets and human-driven instructions, leading to potential gaps in real-world coverage. In contrast, our two-player game framework dynamically generates and adapts prompts, uncovering and addressing weaknesses of LLMs that might otherwise go unnoticed. This iterative adversarial process, where the adversarial agent is continuously forced to generate novel challenges, provides an ongoing learning loop, which enhances generalization and robustness\\u2014a key challenge in the field of LLM alignment. Additionally, our theoretical proof of convergence to Nash Equilibrium is novel in its own right, adding mathematical rigor to the process of adversarial alignment.\\n\\n**Q2: Triviality of the Additional Diversity Reward. The additional diversity reward also feels somewhat trivial, as it is a common technique in multi-agent settings. It appears more as a practical trick rather than a meaningful contribution or innovation to the overall methodology.**\\n\\n>Regarding the additional diversity reward, while diversity constraints (like BLEU scores and sentence embeddings) may seem common in multi-agent settings, the specific role they play in our framework is crucial. 
The diversity reward is not just a technical addition; it serves to prevent the adversarial agent from exploiting a narrow set of prompts, ensuring that the generated prompts continuously push the boundaries of the defensive agent\\u2019s capabilities. This prevents premature convergence to an easily defeatable set of adversarial inputs, thus facilitating a more comprehensive training process. In the context of RLHF-based alignment, where prompt coverage can be limited, our method of dynamically generating diverse, challenging prompts enhances the alignment process by ensuring broad and adaptive coverage of edge cases, thereby strengthening model robustness.\\n\\n**Q3: Technical Flaw: The paper\\u2019s analysis relies on mirror descent, which guarantees convergence only for the average strategy[1]. However, the final round strategy tends to cycle around the Nash equilibrium rather than converge to it [1][2]. As a result, using only the final strategy in place of the average strategy is not theoretically justified in this context.**\\n\\n> In theory, we can select the best policy from all iterations to yield, which has a similar effect as average policy [1]. And in our experimental setting context, this is similar to selecting the final policy.\\n\\n>[1] Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF. Xie & Foster et al.\\n\\n**Q4: Computational Cost: The approach requires maintaining two policies for alternating updates, with each policy being optimized using PPO. This results in substantial storage and computational costs, particularly in the context of RLHF. Furthermore, as highlighted in the third point, the use of mirror descent mandates tracking the average policy over time, making it insufficient to rely solely on the final policy. 
Storing all the historical policies or learning an average policy further exacerbates the computational burden, complicating practical implementation at scale.**\\n\\n>**Computational Overhead Considerations**: While maintaining two policies requires additional computational resources, our approach carefully balances this overhead by leveraging **shared resources** between the two agents. Each agent\\u2019s optimization process can be managed through efficient use of the Proximal Policy Optimization (PPO) method, with the inclusion of KL regularization stabilizing the learning process. This ensures that updates are not computationally prohibitive, even in the context of large-scale implementations. \\n\\n>**Handling Mirror Descent and Averaging Policies**: The concern regarding mirror descent and policy averaging is addressed by leveraging efficient memory management techniques. While it is true that tracking historical policies can add to storage and computational costs, we have implemented strategies to minimize this burden. For example, instead of storing all historical policies, we use a dynamic policy averaging mechanism that efficiently tracks only relevant information needed for stable learning, thus reducing the overall memory footprint.\"}"
]
} |
E6B0bbMFbi | Verbalized Bayesian Persuasion | [
"Wenhao Li",
"Yue Lin",
"Hongyuan Zha",
"Baoxiang Wang"
] | The study of information design explores how an information designer can influence the optimal behavior of players to achieve a specific objective through the strategic selection of the information provided.
This paper focuses on a case, Bayesian Persuasion (BP), where the information designer holds an informational advantage over only one player.
While information design originates from everyday human communication, traditional game-theoretic or multi-agent reinforcement learning methods often model information structures as discrete or continuous scalars or vectors; this approach fails to capture the nuances of natural language, significantly limiting their applicability in real-world scenarios.
By leveraging the powerful language understanding and generation capabilities of large language models (LLMs), this paper proposes a verbalized BP framework that extends classic BP to real-world games involving human dialogues for the first time.
Specifically, we map the classic BP to a verbalized mediator-augmented game, where LLMs instantiate the information designer and receiver.
To efficiently solve the game in the language space, we transform agents' policy optimization into prompt optimization and propose a generalized equilibrium-finding algorithm with a convergence guarantee.
Numerical experiments in realistic dialogue scenarios, such as recommendation letters, courtroom interactions, and law enforcement, validate that the VBP framework can reproduce theoretical results in classic settings and discover effective persuasion strategies in more complex natural language and multistage settings. | [
"Large Language Models",
"Information Design",
"Bayesian Persuasion",
"Game Theory",
"Multiagent Systems"
] | Reject | https://openreview.net/pdf?id=E6B0bbMFbi | https://openreview.net/forum?id=E6B0bbMFbi | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zlcXIBjbD0",
"z1l2xmYX9h",
"vkSIaKDjD2",
"srC8DFLsb3",
"sL19nHn9Zv",
"m6kK6zmCSR",
"lWHO5Hg4YG",
"jR0mO9V3ot",
"jLr3PuqNww",
"hzt2IEDCto",
"hQPMgXNIDh",
"gVZlJzbA6S",
"b11Ybivy7g",
"ZhE51BYAUN",
"YVurf16V0A",
"WgTtSl0Ull",
"TlxZBI6mlA",
"TUcP0zRf6w",
"SWEv23zPRV",
"QoK0JUJQcX",
"Q35CS0tf5V",
"PX8P55Rkii",
"OgWPUglmqx",
"OVXHpHX4X2",
"NWgcnufFdZ",
"J72dDY437b",
"FZhrvXylM3",
"DhdInEBfm4",
"CUJWx6VnLu",
"COwjw4mjGR",
"BCbqbE2doR",
"B4YINAWGfw",
"7sOjvwdBQP",
"7ULz8FjcIf",
"6vMqMhulNu",
"4mfZMghP0g",
"4NVKbIlpjw",
"1NRhAGd4hX",
"13XMItUMQK",
"0TZHcHfw7h",
"0LxwkFp0KM",
"0ElaWV41pD"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1731827020444,
1732597105313,
1731827093012,
1737523910042,
1731995539236,
1731995560414,
1731995316034,
1731928615968,
1731826857718,
1730672717954,
1731995457069,
1730661142150,
1731928385758,
1731935650152,
1730721818812,
1732621479723,
1731928742587,
1732680049421,
1731826898401,
1732618629826,
1731935618555,
1730459220871,
1731928471136,
1731928452479,
1731826999100,
1731995632087,
1731928413262,
1731995364552,
1731826926301,
1731826964882,
1734779452331,
1732719531791,
1731995609135,
1731837603414,
1731929782009,
1731995491605,
1732120502727,
1731928438255,
1731995340405,
1732595578223,
1731995385437,
1731995583800
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_PcFV"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_tm2K"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_gJz3"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_PcFV"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_NKjM"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Area_Chair_okBp"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_NKjM"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_PcFV"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Reviewer_PcFV"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
],
[
"ICLR.cc/2025/Conference/Submission8456/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"> Q6: **Elaboration on Figure 7**: Figure 7 discusses variations in prompts, but the information presented is unclear, and the analysis feels vague. Could you elaborate further on this?\\n\\nThank you for your question regarding Figure 7. We understand that the information presented may have seemed vague, and we appreciate the opportunity to elaborate on the details. Below is a more thorough explanation based on the key elements of our framework and experiment.\\n\\n**Explanation of Figure 7:**\\n\\n1. **Figure 7 shows the evolution of strategies (prompts) in three classic BP problems under the S2 setting**:\\n - In Figure 7, we track how the strategies (prompts) evolve over iterations of the **PSRO (Policy Space Response Oracle)** framework. Specifically, we use **OPRO** as the best response oracle. The figure visualizes how the prompts change as the PSRO framework iteratively improves the sender and receiver strategies.\\n\\n2. **Maintenance of a Strategy Pool**:\\n - The PSRO framework maintains a **strategy pool** for both the sender and receiver. This pool contains different strategies (prompts) that have been generated throughout the iterations. The actual strategy the sender or receiver executes is a **mixed strategy**\\u2014a weighted combination of strategies from this pool.\\n - Figure 7 displays the **top 10 strategies (prompts)** with the highest selection probabilities in the final strategy pool. This helps illustrate which prompts will most likely be chosen after optimization.\\n\\n3. **Hierarchical Prompt Optimization**:\\n - In our experiments, the optimization of prompts follows a **hierarchical process**. First, **OPRO** optimizes the **type** or **category** of the prompt (e.g., the general structure of the message). After determining the type, OPRO then optimizes the **specific content** of the prompt within that category.\\n - In Figure 7, this hierarchical process is reflected in the **first two columns** of each table. 
The first column represents the optimized category of the prompt, and the second column shows the specific content optimized within that category.\\n\\n4. **Highest Probability Strategies**:\\n - The **third and fourth columns** of each table display the selection probabilities for the top 10 strategies (prompts) that emerged after PSRO converged. These probabilities indicate the likelihood of each specific prompt being chosen from the pool after optimization.\\n\\n5. **Change in Selection Probabilities Over Iterations**:\\n - The **fifth column** shows how the probability of each strategy (prompt) being selected changes over the iterations of the PSRO framework. This helps illustrate the **evolution of the strategy pool** as the sender and receiver adapt and refine their strategies through multiple iterations.\\n\\n**Rebuttal Summary:**\\n\\nIn summary, Figure 7 provides a detailed view of how the strategies (prompts) evolve over time in our experiments using the PSRO framework with OPRO as the best response oracle. The figure captures the top 10 strategies with the highest selection probabilities, showing both the hierarchical optimization of prompt categories and content and how the selection probabilities of these strategies change over time. This evolution reflects the adaptation of the sender and receiver as they optimize their strategies within the verbalized Bayesian persuasion framework. \\n\\nWe will further clarify these points in the revised version of the paper to make the analysis more accessible and ensure the relationship between the table columns and the prompt optimization process is clearer.\"}",
"{\"title\": \"Summary of Revisions in the Revised Version\", \"comment\": \"**Dear Reviewers,**\\n\\nWe are deeply grateful to all reviewers for their insightful and constructive feedback. Your comments and suggestions have been invaluable in improving our work's quality, clarity, and depth. We are pleased to inform you that we have uploaded the **revised version** of our paper, where we have carefully addressed the points raised during the review process. Below, we highlight the major revisions made:\\n\\n1. **Improved Organization of the Main Text**:\\n - We have reorganized the content by merging **Section 2.1 (Bayesian Persuasion)** and **Section 2.2 (Modeling BP as a Mediator-Augmented Game)** into a single, streamlined **Problem Formulation** section.\\n - **Section 2.4 (Classic BP Problems)** has been moved to the experimental section to improve the flow of the paper.\\n2. **Incorporation of Related Work**:\\n - We have supplemented the discussion with the work of Bai et al. in the revised version (added after **Appendix A.3**).\\n3. **Enhanced Explanations and Discussions**:\\n - Additional analysis and discussion on Figure 7 have been provided in **Appendix G.1.2**.\\n - A subsection on real-world applications has been included in **Appendix D.1** to better contextualize our work.\\n - A more detailed discussion on unaligned LLMs has been added in **Appendix G.1.3**, along with expanded insights on the S3 setting in **Appendix G.1.1**.\\n - A new discussion on obedience constraints has been added to **Appendix F.2** to clarify this key aspect and address potential concerns.\\n4. **Technical Additions and Future Work**:\\n - We have included pseudocode in **Appendix B** for better clarity and implementation guidance.\\n - A discussion on the Price of Anarchy (PoA) has been added to **Appendix H** for future work directions.\\n5. 
**Showcasing LLM Prompts**:\\n - Due to space constraints in the main text, we have moved the LLM prompt demonstrations to **Appendix F.4**, with appropriate references added in the main text.\\n\\nAll revisions have been clearly marked with **blue highlights** in the revised version to facilitate your review. Additionally, we have corrected typos and minor errors identified during the review process.\\n\\nWe truly appreciate your dedicated time and effort in reviewing our submission. Your thoughtful feedback has been instrumental in helping us refine and improve our work. Thank you again for your invaluable contributions.\\n\\nBest regards,\\n\\nOn behalf of all authors\"}",
"{\"comment\": \"> Q7: **Real-world Applicability**: Can your proposed method be applied to broader, real-world scenarios or other potential applications? If so, please briefly describe how it might be applied and any potential challenges.\\n\\nThank you for your question regarding the real-world applicability of our proposed verbalized Bayesian persuasion (VBP) framework. We appreciate the opportunity to further elaborate on how our method can be applied to broader, real-world scenarios, particularly focusing on the two examples you mentioned, and to discuss the potential challenges in greater detail.\\n\\n**Generalizability to Multi-Sender, Multi-Receiver, and Multi-Round Tasks**: Since the VBP framework models Bayesian persuasion as an **extensive-form game** and uses large language models (LLMs) for decision-making and strategy optimization, it is theoretically extensible to more complex, real-world tasks involving **multiple senders**, **multiple receivers**, and **multi-round interactions**. This generalization opens the door to solving a wide range of real-world problems where multiple actors participate in strategic communication over several rounds, making it relevant for real-time decision-making and long-term strategic planning.\\n\\n**Example 1: Conversational Recommendation Systems**\\n\\nOne significant real-world application is in **conversational recommendation systems**, particularly in the context of live-stream shopping. This scenario involves multiple senders (e.g., influencers or sales agents) trying to persuade a potentially large and diverse group of receivers (customers) to purchase products during a live-stream session. 
The dynamic interaction, with real-time communication between senders and receivers, makes it a perfect fit for multi-sender, multi-receiver, and multi-round BP problems.\\n\\n - **How VBP Can Be Applied**: In this setting, each sender (influencer or salesperson) can be modeled as an agent who strategically chooses how to present information about a product to maximize customer engagement and conversions. The receivers (customers) are individuals with potentially different preferences, beliefs, and levels of trust in the senders. The VBP framework can optimize the prompts (e.g., how product information is conveyed or how offers are phrased) to maximize the likelihood of purchasing across various customer segments. \\n\\n - **Potential Challenges**: A challenge in this scenario is the **heterogeneity of receivers**\\u2014each customer may interpret the signals differently based on their preferences, making it difficult to design a one-size-fits-all strategy. Additionally, the **real-time nature** of live-stream shopping requires highly efficient decision-making algorithms, as senders need to adapt their communication strategies on the fly. Scaling this to handle thousands or millions of receivers in real-time would require efficient parallel processing and optimization techniques.\\n\\n**Example 2: DRG Strategy in Healthcare**\\n\\nAnother important real-world application is in healthcare, particularly in the context of the **Diagnosis-Related Group (DRG) strategy**. DRG systems are used by governments and healthcare providers to categorize hospital cases for the purpose of determining reimbursement rates. 
In such a system, the **regulator** (e.g., a government agency) acts as the **receiver**, while **hospitals and post-acute care (PAC) providers** act as the **senders** who have an informational advantage regarding patient conditions, treatment options, and costs.\\n\\n - **How VBP Can Be Applied**: In this case, the senders (hospitals and PAC providers) have more detailed information about the patient's condition and treatment needs, while the government (receiver) needs to design a reimbursement policy that discourages unnecessary or overly expensive treatments. The VBP framework can be used to model the incentives and communication strategies of hospitals and PAC providers as they present information to the government. The goal would be to optimize the policy to encourage cost-effective treatments while ensuring patient care is not compromised.\\n\\n - **Potential Challenges**: A key challenge here is the potential for **conflicting incentives** among the senders. This introduces a layer of complexity in the multi-sender BP problem, as senders might compete or collaborate to influence the receiver's decision. Additionally, the **scale of the problem**\\u2014with potentially thousands of hospitals and providers\\u2014requires the VBP framework to handle large-scale optimization efficiently. Moreover, the long-term nature of updating policies based on feedback introduces challenges related to multi-round interactions.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"> **Q6: Clarification on the necessity of the obedience constraint** *\\\"I have trouble understanding why the obedience constraint is used in this paper. As far as I can understand, one can simplify the BP game by assuming the sender just recommends the best possible action from the receiver's perspective, and then the problem becomes just choosing the best action for the sender to recommend, under the constraint that it must be optimal from the receiver's perspective (this assumes the receiver knows the sender's policy, which is the commitment constraint). Is this understanding correct? It seems that in this case, using the obedience constraint simplifies the game so much that one could have the LLM implement a simple (prompted) policy of either recommending and not recommending, and the obedience constraint makes sure this finds the right equilibrium. If the goal is to have a more realistic game where the text of the reference letter actually matters, then what does the obedience constraint do here? I might be misunderstanding something.\\\"*\\n\\nThank you for raising this important question regarding the necessity of the obedience constraint in our framework. We realize that we did not provide enough detail in the paper to fully explain this aspect, and we appreciate the opportunity to clarify it here.\\n\\n1. **Realistic Scenarios Beyond Simple Recommendation**\\n First, we agree that the sender could recommend the best action from the receiver\\u2019s perspective in a simplified version of the Bayesian persuasion game. However, this approach does not reflect the complexity of real-world recommendation scenarios, such as writing reference letters. In practice, a sender (e.g., a reference letter writer) does not just provide a binary signal (recommend or not recommend). 
Instead, the sender communicates more nuanced information through natural language, which might imply various levels of recommendation strength or provide additional context for the receiver to interpret.\\n\\n2. **Extended Obedience Constraints**\\n To better capture this reality, we do not directly use the standard obedience constraint described in Equation (1). Instead, we implement the **extended obedience constraints** proposed by **Lin et al. (2023)** (discussed in Section 4.3 and Equation 4 of their work). This extension is crucial because it removes the strict **revelation principle** analysis from the obedience constraint, allowing the sender\\u2019s role to shift from \\u201caction recommending\\u201d to \\u201csignal sending.\\u201d\\n\\n In other words, the sender no longer has to map a signal to a single recommended action. Instead, the sender can use **natural language signals** that may contain redundant or implicit information, leaving more room for nuanced communication, as is common in real-world settings. This shift is crucial for modeling verbalized Bayesian persuasion problems since it allows for richer, more realistic signal spaces.\\n\\n3. **Redundancy and Natural Language**\\n Introducing **redundancy** in the signaling scheme allows for more sender communication flexibility. In the strict obedience constraint framework, a signal must map one-to-one with a specific recommended action. However, with the extended obedience constraints, the sender can now map multiple signals to the same action distribution, enabling more nuanced messaging through natural language. 
This redundancy is similar to what is used in other learning algorithms, where increasing the capacity of a model (e.g., enlarging a neural network) allows for better encoding and representation of complex mappings.\\n\\n This flexibility is essential for real-world persuasion problems, where the sender might not always explicitly recommend a specific action but instead provide signals that leave room for interpretation by the receiver. For instance, in a reference letter, subtle language choices can imply varying degrees of recommendation without explicitly stating a binary decision.\\n\\n4. **Why the Obedience Constraint is Still Necessary**\\n The obedience constraint in its extended form is still necessary to ensure that the sender's signals are credible and aligned with the receiver's best interests. Without some form of obedience constraint, the sender could send misleading signals that would ultimately reduce the effectiveness of the persuasion process. The extended obedience constraint balances realistic communication and strategic alignment in the game by allowing for nuanced and redundant signals while maintaining credibility.\\n\\nIn summary, the **extended obedience constraint** allows for more realistic and flexible communication in verbalized Bayesian persuasion problems, accommodating the complexity of natural language while ensuring that the sender\\u2019s signals remain credible. This approach moves beyond a simple recommendation model and better reflects real-world scenarios.\\n\\n[**Lin et al. (2023)**] Lin, Yue, et al. \\\"Information design in multi-agent reinforcement learning.\\\" NeurIPS 2023.\"}",
"{\"comment\": \"> **Q7: Clarification on the takeaway from S2 results** *\\\"In S1, the authors include a reward for the LLMs to give clearer signals. It seems that this basically is an ablation that forces the game back into a simple 'yes/no' action space. It seems that the results here are similar to the S2 case where this reward isn't used. I am not sure this is a good or a bad sign\\u2014what is the takeaway from the S2 results? Is there anything going on under the hood that goes beyond a simple binary signal? (In a way that would be relevant to the game/optimization/etc.)?\\\"*\\n\\nThank you for your question regarding the results from scenario S2 and how they relate to the findings from S1. We appreciate your attention to the differences between these two setups and their implications for the effectiveness of our approach.\\n\\n1. **Qualitative vs. Quantitative Analysis**\\n First, you're absolutely right to point out that in **S1**, we introduce a reward for clearer signals, which can simplify the action space into a more binary-like structure. This setup allows us to compare our algorithm's results directly with those of classical solvers, providing a **quantitative benchmark** for validating the effectiveness of our method. However, in **S2** and **S3**, we do not have such a reward structure, meaning that the results are less directly comparable to classical solvers. Instead, we conduct a more **qualitative analysis** of the strategies generated by the LLMs in these settings.\\n2. **S2 Results Reflect the Nature of Binary Signaling in Game Theory**\\n As you mentioned\\u2014and as we previously discussed\\u2014**from a game theory perspective, neither the sender nor the receiver gains much from using more than a binary signal or policy**. This is a well-known feature of Bayesian persuasion games, where the optimal strategy often reduces to a binary signal. Given this, the **similarity between the results in S1 and S2** is not necessarily a bad sign. 
On the contrary, it reinforces that our **VBP framework** effectively captures the optimal signaling behavior, even when we remove the explicit reward for clearer signals.\\n3. **Takeaway from S2 Results**\\n The key takeaway from the S2 results is that **VBP performs consistently across different settings**, even when the environment becomes more complex and we remove the \\\"clear signal\\\" incentive. The fact that the results in S2 still align with those in S1 demonstrates the robustness of our approach. It shows that the LLM, when guided by the VBP framework, naturally converges to strategies resembling binary signaling, which is theoretically optimal for our game structure. This consistency across different setups highlights the **effectiveness and reliability** of VBP in solving Bayesian persuasion problems, whether or not explicit signals are enforced.\\n4. **Beyond Binary Signals: Qualitative Observations**\\n While the results in S2 may seem to echo the binary nature of S1, there are still **subtle, qualitative differences** in how signals are constructed without a clear signaling reward. In S2, the LLM has more freedom to explore alternative strategies. Although it converges towards binary-like outcomes, the **path** to that convergence may involve more nuanced, multi-step reasoning or signaling, which is not immediately apparent in a purely quantitative comparison. This suggests that the LLM could explore richer communication strategies under the hood, even if the final output appears binary.\\n5. **Conclusion: Validating the Effectiveness of VBP**\\n In summary, the similarity between the results of S1 and S2 is a **positive indication** that our VBP framework effectively guides the LLM toward optimal signaling strategies. The results in S2, despite the lack of explicit rewards for clear signaling, still align with the theoretical expectations of a binary signaling game, validating the robustness of the approach. 
We believe this consistency across different settings underscores the VBP framework's practical utility for real-world Bayesian persuasion applications.\\n\\nThank you again for your thoughtful question, and we hope this clarifies the key takeaway from the S2 results.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their insightful feedback and thoughtful questions. We greatly appreciate the opportunity to clarify our work and provide further details regarding the methodology and its implications. In the following sections, we will address each specific question raised by the reviewer, offering detailed explanations and elaborating on the key aspects of our approach.\\n\\n---\\n\\n> **Q1: Clarification on the presentation complexity and excessive machinery** *\\\"I found the paper somewhat hard to follow. The paper uses a lot of machinery to define optimization problems and solve them, but I didn't always understand exactly what was going on on the most basic LLM level. I think more simplicity would be great with this sort of research.\\\"*\\n\\nThank you for your feedback and for pointing out that the paper might be hard to follow due to the complexity of the machinery used. We acknowledge that the nature of this work, which involves integrating large language models (LLMs) into a game-theoretic framework, introduces multiple layers of optimization and interaction that may seem complex at first glance. However, we hope to clarify the pipeline and the role of the LLMs in our approach.\\n\\n**Overview of the Algorithm Pipeline**\\nThe overall algorithm pipeline is detailed in our response to Reviewer gJz3's **Q3: Vagueness in Method Description**, where we provided a step-by-step breakdown of the process. In summary, the pipeline operates in two main stages:\\n\\n- **Stage 1: LLMs as Decision Makers**\\n In this stage, the LLMs are used directly as decision-makers within the game. Specifically, one LLM acts as the **sender** and the other as the **receiver**. The **sender** receives a prompt and outputs a signal, while the **receiver** processes the signal and outputs an action. 
Both LLMs perform their respective roles based on the prompts, which form the strategies in the Bayesian persuasion game.\\n- **Stage 2: LLMs as Prompt Optimizers**\\n The second stage involves optimizing the prompts given to the sender and receiver LLMs. Instead of updating the model weights (in-weight updates), we focus on **in-context learning** by adjusting the prompts that guide the LLMs\\u2019 outputs. This prompt optimization is the core of our work and is executed using two frameworks: **OPRO** and **FunSearch**. These frameworks are designed to efficiently explore the prompt space and identify prompts that lead to desirable behaviors from the LLMs within the game.\\n\\nWe will provide the algorithm's pseudocode in the revised version and highlight the LLM part.\"}",
"{\"comment\": \"> **Q6: Typo in Equation: L180:** *'... following maximisation problem ...' should it be* $a*$ *instead of* $s$ *?*\\n\\nThank you for pointing out the potential confusion regarding the maximization problem and the use of $s$ in Line 180. We appreciate your careful reading of the text.\\n\\nTo clarify, in Lines 178-179, we define $a^*$ as the result of the maximization of $\\mathbb{E}\\_{\\omega \\sim \\mu\\_{\\pi}(\\cdot \\mid s)} u\\_1(a, \\omega)$, where $a^*$ is indeed a function of $s$. In other words, $a^*$ is the optimal action selected based on the state $s$.\\n\\nWe apologize for not clearly emphasizing this dependency in the original text, which may have caused confusion. The use of $s$ in Line 180 is intentional, as it refers to the state that influences the maximization process resulting in $a^*$. However, we understand how this could have led to a misunderstanding, and we will revise the explanation to make the relationship between $s$ and $a^*$ more explicit.\\n\\n**Revision Plan**:\\n\\nIn the revised version, we will add a clarification to ensure that readers understand $a^*$ is a function of $s$ and that this dependency is central to the maximization problem. This should resolve any ambiguity and make the notation more transparent.\\n\\nThank you again for your detailed feedback!\\n\\n---\\n\\n> **Q7: Reference to Empirical Game-Theoretic Analysis (EGTA):** *L235: focusing on a strategically relevant subset of strategies is comprehensively discussed in EGTA [1] and would be worth referring to here?*\\n\\nThank you for the suggestion regarding the inclusion of a reference to Empirical Game-Theoretic Analysis (EGTA) in Line 235. 
We agree that EGTA, particularly its focus on strategically relevant subsets of strategies, is highly relevant to the discussion in this section.\\n\\nIn the revised version of the paper, we will ensure that a citation to the work on EGTA is included, specifically referencing **\\\"Methods for Empirical Game-Theoretic Analysis\\\"** [1]. This work provides valuable insights into how subsets of strategies can be identified and analyzed within a game, which aligns well with our approach of focusing on strategically relevant prompts in our VBP framework.\\n\\nWe appreciate your recommendation and will incorporate this reference to strengthen the connection between our methodology and existing work in the field.\\n\\n**Revision Plan**:\\n\\nIn the updated manuscript, we will:\\n\\n- Cite the paper **\\\"Methods for Empirical Game-Theoretic Analysis\\\"** as suggested.\\n- Acknowledge the importance of focusing on strategically relevant subsets of strategies, as discussed in EGTA, in relation to our approach.\\n\\nThank you again for your insightful feedback.\\n\\n[1] https://aaai.org/papers/01552-aaai06-248-methods-for-empirical-game-theoretic-analysis/\\n\\n---\\n\\n> **Q8: Clarification of \\\"Static\\\" in L341:** *What does 'Static' refers to here?*\\n\\nThank you for your question regarding the term \\\"static\\\" in Line 341. In this context, \\\"static\\\" is used in contrast to the **multi-stage setting** of S3. Specifically, \\\"static\\\" refers to scenarios where there are **no state transitions**\\u2014meaning the environment or system remains fixed throughout the interaction, and the game does not evolve over multiple stages.\\n\\nWe will clarify this in the revised version to avoid any ambiguity. 
The term \\\"static\\\" here simply indicates that the game setup does not involve state changes, unlike the more complex, multi-stage structure of S3.\\n\\n**Revision Plan**:\\n\\nIn the revised version, we will explicitly state that \\\"static\\\" refers to the absence of state transitions, highlighting the distinction between static and multi-stage scenarios like S3.\\n\\nThank you for pointing this out, and we hope this clarification helps!\\n\\n---\\n\\n> **Q9: Clarification of Reasoning Regarding the Mediator:** *L370: 'the reason we can leverage ...', as mentioned in the Weaknesses section I could not follow the reasoning here. It would be great if the authors could clarify.*\\n\\nPlease refer to **Q1: Clarification of Mapping to Mediator-Augmented Games**.\\n\\n---\\n\\n> **Q10: Expectation for Always Honest Strategy:** *Figure 4: For the honest probability, it appears that all methods have converged to the 'always honest' strategy in all 3 settings. Is that expected? I would have thought that if the professor always recommends truthfully, the recruiter would be best served to always trust the recommendation at which point the professor could profitably deviate?*\\n\\nPlease refer to **Q2: Honest Probability in Chart (d)** in the rebuttal for **Reviewer NKjM**.\"}",
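The maximization discussed in Q6 above, $a^*(s) = \arg\max_a \mathbb{E}_{\omega \sim \mu_\pi(\cdot \mid s)} u_1(a, \omega)$, can be illustrated with a minimal numerical sketch; the utilities and posterior below are toy values chosen by the editor, not taken from the paper:

```python
import numpy as np

def best_response(posterior, utility):
    """a*(s) = argmax_a E_{w ~ mu_pi(.|s)} u1(a, w).

    posterior: shape (n_states,) -- the belief mu_pi(. | s) induced by state s
    utility:   shape (n_actions, n_states) -- u1(a, w)
    """
    expected = utility @ posterior  # expected utility of each action under the belief
    return int(np.argmax(expected))

# Toy recruiter example: actions (hire, pass), states (strong, weak).
u1 = np.array([[2.0, -1.0],   # hire: good if strong, costly if weak
               [0.0,  0.0]])  # pass: neutral either way
a_star = best_response(np.array([0.7, 0.3]), u1)  # a* varies with s via the posterior
```

Because the posterior is itself a function of $s$, the chosen action changes as $s$ changes, which is exactly the dependency the response emphasizes.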
"{\"comment\": \"We sincerely thank the reviewer for their insightful feedback and thoughtful questions. We greatly appreciate the opportunity to clarify our work and provide further details regarding the methodology and its implications. In the following sections, we will address each specific question raised by the reviewer, offering detailed explanations and elaborating on the key aspects of our approach.\\n\\n---\\n\\n> Q1: **Clarification on the Benefits of a Verbalized Approach**: The paper does not clearly articulate the benefits of a verbalized Bayesian persuasion approach. Can you clarify what advantages this verbalized approach provides over existing methods?\\n\\nThank you for your insightful question regarding the benefits of a verbalized Bayesian persuasion (BP) approach. The core advantage of our approach stems from its ability to transcend the abstractions typically imposed by traditional BP models, which often reduce complex real-world decisions to oversimplified, low-dimensional action and information spaces.\\n\\nIn classic BP settings, the utility functions are typically solved analytically, and the problem is reduced to finding an optimal Bayes-correlated equilibrium. However, these methods often rely on restrictive assumptions, such as binary information spaces or discrete action sets, which fail to capture the richness and nuance of many real-world applications. For instance, in the recommendation letter problem, traditional BP models reduce the student\\u2019s quality to a binary classification (e.g., weak or strong), and the professor\\u2019s actions to recommend or not. This oversimplification strips away much of the meaningful information inherent in the task.\\n\\nBy leveraging large language models (LLMs) within our framework, we aim to directly address these limitations by operating within the natural language domain. 
This allows us to represent more nuanced informational structures and action spaces closer to how persuasion occurs in real-world scenarios. Specifically, LLMs enable us to model complex verbalized interactions where persuasion strategies are not limited to predefined categories but are expressed through natural language, capturing subtleties like tone, context, and implied meanings.\\n\\nThus, the primary benefit of our verbalized approach is its potential to handle richer, more realistic persuasion tasks that are difficult to model using traditional BP methods. This opens the door to broader real-world applications where simplifications like binary choices are inadequate, allowing for more sophisticated and effective persuasive communication strategies.\"}",
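For contrast with the verbalized setting, the binary recommendation-letter game that the response above describes as oversimplified can be solved in a few lines. A sketch of the standard Kamenica-Gentzkow-style calculation, with an illustrative prior and hiring threshold chosen by the editor rather than taken from the paper:

```python
def optimal_signaling(prior_strong, threshold=0.5):
    """Classic binary BP: the professor always recommends strong students and
    recommends a weak one with probability x, pushed as high as the recruiter's
    obedience constraint allows (the posterior on 'strong' given a
    recommendation must stay at or above the hiring threshold)."""
    p = prior_strong
    # Obedience: p / (p + (1 - p) * x) >= t  =>  x <= p * (1 - t) / (t * (1 - p))
    x = min(1.0, p * (1 - threshold) / (threshold * (1 - p)))
    sender_value = p + (1 - p) * x  # probability the recruiter hires
    return x, sender_value

x, v = optimal_signaling(prior_strong=1 / 3)  # x = 0.5, v = 2/3 in the textbook case
```

The entire solution reduces to one scalar `x`; everything about how the letter is actually written is abstracted away, which is the limitation the verbalized approach targets.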
"{\"summary\": \"The paper proposes verbalized Bayesian Persuasion as a generalisation of Bayesian Persuasion leveraging the capabilities of LLMs in facilitating persuasion scenarios in natural language directly. The paper argues for prompt optimisation instead of policy optimisation for scalability, and arguably derives convergence guarantees from a variation to the mediator-augmented game.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The Bayesian Persuasion setting could be an interesting direction to explore given the generalised linguistic capabilities of LLMs. The authors propose to map verbalised persuasion within the game-theoretic framework of Bayesian persuasion which is novel and timely. The authors then described several intuitive real-world scenarios that involve mixed-motive persuasion from stakeholders, exemplifying the target problem setting.\", \"weaknesses\": \"My main concern with accepting this paper is that it's unclear what the main contributions of the method are, which, according to Figure 3, include a) mapping VBP to the framework of mediator-augmented games from which it derives its convergence guarantees b) a set of solvers including the OPRO algorithm, FunSearch algorithm and Prompt-Space Response Oracle algorithms.\\n\\nRegarding a), I don't see why and how the VBP setting can be mapped onto mediator-augmented games of Zhang et al. 2024, where in a mixed-motive game with $n$ players, the game transform introduces a fictitious mediator player whose objective is to maximise an optimality objective while maintaining an equilibrium of the game. This is distinctly different from the authors' proposed mapping where the mediator is played by the sender player, with the receiver the only other player. Between which players is the mediator mediating? Zhang et al. 2024 also propose a specific mediator player utility function from which the convergence guarantee is derived. 
If I understood correctly, the authors propose a mapping where the sender acts as the mediator but retains its original utility function. Overall, I find a) tenuous and confusing. It would be great if this could be clarified in the rebuttal. \\n\\nRegarding b), there are several methods described here, and it's not clear which ones are critical elements of the VBP framework. Among these, PSRO provides a convergence guarantee (in a specific sense), yet the writing and Figure 3 would suggest that the convergence guarantee comes from the mediator-augmented game formulation. Overall, I would have appreciated a more succinct description of the framework with its necessary components instead of a juxtaposition of several rather sophisticated methods whose necessity in the framework remains unclear.\", \"questions\": \"1. L38-40: \\\"shaping the behaviours of others ... achieve this through either mechanism or information design\\\". I find this unclear or overly assertive. How each player's actions shape those of others is the entire focus of game theory, yet this opening statement makes it sound like co-player behaviour shaping can only occur with modified rewards or observations. You would not deterministically play rock because you know I could exploit it by always playing paper; would that count as shaping the behaviour of co-players?\\n2. L46: \\\"Notably, the designer must ... that influence state transition\\\", this is difficult to follow. Perhaps worth rephrasing?\\n3. L118-L130: PSRO in the limit converges to a Nash equilibrium out of many. In mixed-motive games, NE need not be unique and solutions are not interchangeable. Perhaps this could be a relevant point of discussion, especially since VBP seems to be primarily dealing with mixed-motive games? \\n4. L180: \\\"... following maximisation problem ... $\\\\mathbb{E}_{s \\\\sim \\\\pi(\\\\omega)}$\\\" should it be $a*$ instead of $s$? \\n5. 
L235: focusing on a strategically relevant subset of strategies is comprehensively discussed in EGTA [1] and would be worth referring to here? \\n6. L341: what does \\\"Static\\\" refer to here? \\n7. L370: \\\"the reason we can leverage ...\\\", as mentioned in the Weaknesses section, I could not follow the reasoning here. It would be great if the authors could clarify. \\n8. Figure 4: For the honest probability, it appears that all methods have converged to the \\\"always honest\\\" strategy in all 3 settings. Is that expected? I would have thought that if the professor always recommends truthfully, the recruiter would be best served to always trust the recommendation, at which point the professor could profitably deviate? \\n9. L447: the authors suggested that the pattern of honesty rising, falling and then rising again validates the hypothesis that this is due to the use of an aligned LLM. I would have thought that a simpler explanation is that the professor and the recruiter are simply in a strategic cycle? Would that not be a reasonable explanation of this phenomenon? \\n10. L483: \\\"... at most the top 10 strategies with the highest probabilities\\\", is this for computational reasons? Pruning actions by their support at a restricted game equilibrium could be problematic in general. \\n11. Figure 7: column \\\"Converged prob\\\" and the next column are redundant and could be consolidated to create space for larger fonts that are more readable? How are the probabilities computed? Are they the average probability of taking over the pool of policies in the PSRO population? \\n\\n[1] https://aaai.org/papers/01552-aaai06-248-methods-for-empirical-game-theoretic-analysis/\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"> **Q5: Clarification on the novelty of LLM deception strategies** *\\\"It might be that the optimization performed in the paper actually discovers interesting LLM behaviors and strategies, but this is hard to tell for me. I think I can see how the paper uncovers interesting behaviors within the setting studied here, i.e. when optimizing prompts, it's interesting that some amount of lying/deceiving gets reinforced, and that this game setup works in a sense and finds something like an equilibrium. But I haven't been convinced that this specific setup is interesting enough to study on its own\\u2014it seems too artificial to me to add a lot beyond either (i) the existing toy game theory setting on one hand, or (ii) just studying persuasion directly by prompting LLMs to write lying/deceptive/persuasive etc. texts.\\\"*\\n\\nThank you for your insightful feedback and for raising concerns about the novelty and interest of the deception strategies that our work uncovers. We understand that the reinforcement of deceptive or persuasive strategies during prompt optimization could appear to be a natural outcome of the game-theoretic setting, and we would like to clarify both the motivation and the novel contributions of our approach.\\n\\n1. **The Core Objective: A Game-Theoretic Solver for Verbalized BP**\\n The primary goal of our work is not solely to study LLM deception or persuasion strategies in isolation but rather to **design a game-theoretic solver** for **verbalized Bayesian persuasion (BP) problems**. This goes beyond just examining the propensity of LLMs to lie or deceive. To achieve this, we use the **prompt-space response oracle (PSRO)** framework, which allows us to integrate LLMs into a structured game-theoretic environment. 
Two key components enhance this framework:\\n\\n - **OPRO (Optimized Prompt Response Oracle)**: A best-response oracle that optimizes the sender's strategy through prompt engineering.\\n - **FunSearch**: A complementary framework that refines the prompt search for the receiver, ensuring that the strategies discovered are aligned with the theoretical objectives of the game.\\n\\n These components are not just \\\"extra\\\" modules added for complexity\\u2014they are essential for ensuring that the game solver achieves important theoretical properties such as **convergence** and **solution optimality**. Without these tools, we would not be able to rigorously analyze or guarantee the behaviors emerging from LLM interactions in the game setting.\\n\\n2. **Novelty in Theoretical Framework, Not Just Behavior**\\n While it might appear that the LLMs are simply exhibiting behaviors like lying or deception, the **novelty** of our work lies in the **game-theoretic framework and optimization techniques** we use to discover and analyze these behaviors. The LLMs are not just being prompted to generate deceptive or persuasive text; their interactions are embedded within a formal BP framework where we can rigorously study **how** and **why** certain strategies emerge. This is a significant departure from simply prompting LLMs to write persuasive or deceptive text in isolation.\\n\\n The **equilibrium** that we find through our game setup reflects theoretically grounded strategies that have been optimized and analyzed within a game-theoretic context. This is a key distinction from simply conducting case studies on LLM deception. Our setup allows us to explore how LLMs might behave in **strategic interaction environments** where deception or lying could naturally arise as part of the optimal solution, rather than as an ad-hoc behavior.\"}",
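The PSRO loop the response describes can be sketched in miniature. In the sketch below, exact matrix-game best responses stand in for the OPRO/FunSearch prompt oracles, and fictitious play stands in for the meta-game solver; both substitutions are editorial simplifications, not the paper's implementation:

```python
import numpy as np

def solve_zero_sum(M, iters=2000):
    """Approximate equilibrium of the restricted (meta-)game via fictitious play."""
    rc, cc = np.ones(M.shape[0]), np.ones(M.shape[1])
    for _ in range(iters):
        rc[np.argmax(M @ (cc / cc.sum()))] += 1   # row best-responds to column's average
        cc[np.argmin((rc / rc.sum()) @ M)] += 1   # column best-responds to row's average
    return rc / rc.sum(), cc / cc.sum()

def psro(payoff, n_rows, n_cols, rounds=10):
    """PSRO skeleton: grow each player's population with best responses to the
    current meta-strategy. In VBP the populations would be prompts and the
    best-response step a prompt optimizer rather than an exact argmax."""
    rows, cols = [0], [0]  # initial restricted populations
    for _ in range(rounds):
        M = np.array([[payoff(i, j) for j in cols] for i in rows])
        sr, sc = solve_zero_sum(M)
        br_row = max(range(n_rows),
                     key=lambda i: sum(s * payoff(i, j) for s, j in zip(sc, cols)))
        br_col = min(range(n_cols),
                     key=lambda j: sum(s * payoff(i, j) for s, i in zip(sr, rows)))
        if br_row in rows and br_col in cols:
            break  # no new best response: the restricted game has stabilized
        if br_row not in rows:
            rows.append(br_row)
        if br_col not in cols:
            cols.append(br_col)
    return rows, cols

# Matching pennies: PSRO expands both populations to the full strategy sets.
rows, cols = psro(lambda i, j: 1.0 if i == j else -1.0, 2, 2)
```

The theoretical properties mentioned above hinge on the best-response step being adequate, which is why the quality of the OPRO/FunSearch oracles matters to the overall guarantees.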
"{\"summary\": [\"The paper studies using LLMs in a Bayesian persuasion setting, which is a game between two players. One of the players (the sender) has access to some private information, and tries to influence the other player's (the receiver) actions by sharing specific information with them. The other player tries to use the shared information to achieve their own goals.\", \"The new aspect this paper introduces is that they use LLMs for both the sender and the receiver. They optimize the LLM agents' actions in the game by optimizing a distribution over a space of prompts. For instance, in a recommendation letter setting, the prompt specifies specific aspects of the letter such as whether or not to omit a weakness of the candidate.\", \"The authors reproduce theoretical results from the classic BP setting experimentally, and also expand the setting to multi-turn interactions. They extend the prompt-space response oracle to multi-turn interactions using conditional prompt optimization.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Persuasion in LLMs seems like an important topic given that LLMs will in fact increasingly be used for tasks such as writing recommendation letters.\", \"It is interesting to make the BP framework more realistic by studying actual written text rather than simple yes/no messages.\", \"It is a great idea to optimize prompts to study this setting, which is less involved than e.g. trying to do RL directly on the LLMs\", \"The paper includes comprehensive experiments and evaluations, including detailed ablations and examples in the appendix.\", \"I found it useful to see how strategies developed over training in Figure 7, specifically that more relevant categories ended up being selected more often.\"], \"weaknesses\": [\"I found the paper somewhat hard to follow. 
The paper uses a lot of machinery to define optimization problems and solve them, but I didn't always understand exactly what was going on at the most basic LLM level. I think more simplicity would be great with this sort of research.\", \"In general, I would prefer there to be fewer preliminaries and to get to the results faster. I wonder whether one could simplify some of the discussion of preliminaries to the parts that matter for the paper, though I'm not sure.\", \"It seems that in the end the way the game is set up, it doesn't really matter, for instance, whether the rec letters are actually written eloquently or not. I might be missing something, but it feels like somehow the simple BP games are not really the right testing ground for studying LLM persuasion, because from a game theory perspective, neither the sender nor the receiver gains anything by using more than a binary signal/policy.\", \"As far as I can tell, the paper gives examples in the appendix, but I couldn't find any full end-to-end transcripts from the games.\", \"Given that the paper uses many bespoke algorithms to solve different aspects of the setting, I think this won't be that useful in practice. E.g., I think it's unlikely any of these will be useful for training better LLMs. If the goal is more to study propensities of current LLMs and to find out something about persuasion with LLMs, I am not sure what exactly the takeaway is. Is it e.g. \\\"LLMs can implement complex strategies of deception/lying/etc.\\\"? If so, then I think this is not novel and also doesn't require the complexity used in the paper. I might be missing something here and am curious what the authors think.\", \"It might be that the optimization performed in the paper actually discovers interesting LLM behaviors and strategies, but this is hard to tell for me. I think I can see how the paper uncovers interesting behaviors within the setting studied here, i.e. 
when optimizing prompts, it's interesting that some amount of lying/deceiving gets reinforced, and that this game setup works in a sense and finds something like an equilibrium. But I haven't been convinced that this specific setup is interesting enough to study on its own\\u2014it seems too artificial to me to add a lot beyond either (i) the existing toy game theory setting on one hand, or (ii) just studying persuasion directly by prompting LLMs to write lying/deceptive/persuasive etc. texts.\"], \"questions\": [\"I have trouble understanding why the obedience constraint is used in this paper. As far as I can understand, one can simplify the BP game by assuming the sender just recommends the best possible action from the receiver's perspective, and then the problem becomes just choosing the best action for the sender to recommend, under the constraint that it must be optimal from the receiver's perspective (this assumes the receiver knows the sender's policy, which is the commitment constraint). Is this understanding correct? It seems that in this case, using the obedience constraint simplifies the game so much that one could have the LLM implement a simple (prompted) policy of either recommending and not recommending, and the obedience constraint makes sure this finds the right equilibrium. If the goal is to have a more realistic game where the text of the reference letter actually matters, then what does the obedience constraint do here? I might be misunderstanding something.\", \"In S1, the authors include a reward for the LLMs to give clearer signals. It seems that this basically is an ablation that forces the game back into a simple \\\"yes/no\\\" action space. It seems that the results here are similar to the S2 case where this reward isn't used. I am not sure this is a good or a bad sign\\u2014what is the takeaway from the S2 results? Is there anything going on under the hood that goes beyond a simple binary signal? 
(In a way that would be relevant to the game/optimization/etc.)?\", \"It would be nice to have some (possibly abbreviated/stylized) prompts and transcripts in the main body of the paper.\", \"If the prompt doesn't specify exactly how and when to lie, how can this still guarantee the commitment assumption?\", \"It might be that the most interesting result is S3, the iterated setting. However, the paper doesn't focus that much on it, and I think it would require more analysis to draw more interesting conclusions from this. Figure 12 might be useful here, but from eyeballing it I don't really follow how it supports the hypothesis discussed in lines 473-476 in Section 4.2. (As a side note, I think Figure 12 would benefit from additional titles for the different settings. It's not easy to see graphically that these are for two different settings, with two of the plots sharing the same subtitles.)\", \"Line 312 typo \\\"Either a limit on the allowable tree depth\\\" ... missing an or?\", \"Line 320 typo/grammar \\\"through prompt design or expand the receiver's inforset.\\\"\", \"Line 392 \\\"since we use aligned LLMs\\\"---previously the paper talks a lot about \\\"pretrained\\\" LLMs, which could be interpreted as saying these are base models rather than chat/alignment-finetuned LLMs. It might be worth replacing the \\\"pretrained\\\" terminology.\", \"What would you say is the most important takeaway/learning from the paper that would be interesting and useful to the community?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We sincerely thank the reviewer for their insightful feedback and thoughtful questions. We greatly appreciate the opportunity to clarify our work and provide further details regarding the methodology and its implications. In the following sections, we will address each specific question raised by the reviewer, offering detailed explanations and elaborating on the key aspects of our approach.\\n\\n---\\n\\n> **Q1: Clarification of Mapping to Mediator-Augmented Games:** *Regarding a), I don't see why and how the VBP setting can be mapped onto mediator-augmented games of Zhang et al. 2024, where in a mixed-motive game with $n$ players, the game transform introduces a fictitious mediator player whose objective is to maximise an optimality objective while maintaining an equilibrium of the game. This is distinctly different from the authors' proposed mapping where the mediator is played by the sender player, with the receiver the only other player. Among which players is the mediator player mediating between? Zhang et al. 2024 also propose a specific mediator player utility function from which the convergence guarantee is derived. If I understood correctly, the authors propose a mapping where the sender acts as the mediator but retains its original utility function. Overall, I find a) tenuous and confusing. It would be great if this could be clarified in the rebuttal.*\\n\\nWe appreciate the reviewer's detailed feedback and would like to clarify the mapping of Verbalized Bayesian Persuasion (VBP) to Mediator-Augmented Games (MAG).\\n\\n1. **Reference to Zhang et al. (2022):** Our framework primarily follows the methodology outlined in Zhang et al. (2022), which provides several examples illustrating how Bayesian Persuasion (BP) can be modeled as a MAG. Therefore, we do not believe that our approach is \\\"distinctly different\\\" from the framework in Zhang et al., as the reviewer suggested.\\n2. **Specific Examples in Zhang et al. 
(2022):** In particular, Section 3.4, Table 1, and Appendix F of Zhang et al. (2022) offer detailed explanations of how BP can be formulated as a MAG problem, along with the corresponding equilibrium concepts. These sections directly support the idea that BP can be naturally mapped onto a MAG framework, consistent with our approach.\\n3. **Clarification of the Mediator's Role in VBP:** The reviewer's understanding is correct that in our VBP framework, the sender also plays the role of the mediator. However, the sender's utility function has been modified compared to traditional BP. Specifically, after transforming BP into a MAG, we apply the algorithm from Zhang et al. (2024), which reduces the problem to solving a two-player zero-sum game, as demonstrated in Equation 3 of Appendix B in our paper.\\n\\nWe hope this clarifies how our mapping aligns with the methodology from Zhang et al. (2022) and Zhang et al. (2024).\\n\\n---\\n\\n[**Zhang et al. (2022)**]: Brian Zhang and Tuomas Sandholm. Polynomial-time optimal equilibria with a mediator in extensive-form games. In NeurIPS, 2022.\\n\\n[**Zhang et al. (2024)**]: Brian Zhang, Gabriele Farina, Ioannis Anagnostides, Federico Cacciamani, Stephen McAleer, Andreas Haupt, Andrea Celli, Nicola Gatti, Vincent Conitzer, and Tuomas Sandholm. Computing optimal equilibria and mechanisms via learning in zero-sum extensive-form games. In NeurIPS, 2024a.\"}",
"{\"comment\": \"> Q16: Zhang et al. (2024) takes a game of interest (the original BP game in your application) and provides a specific game transform that turns it into a two-player zero-sum MAG. What do you mean by \\\"...after transforming BP into a MAG, we apply the algorithm from Zhang et al\\\"? If you did transform the BP in a specific way, then the guarantees of Zhang et al should imply convergence in the transformed BP game, not the original BP game. Why is that a reasonable approach?\\n\\nThank you for your timely and insightful question. We appreciate the opportunity to clarify our approach regarding the transformation of the Bayesian Persuasion (BP) problem into a Mediator-Augmented Game (MAG) and our application of the algorithm from Zhang et al. (2024).\\n\\nTo address your question in detail:\\n\\n1. **Algorithm Choice**: Due to considerations of computational complexity, we did not apply the **Direct Lagrangian algorithm** as described in Proposition 3.1 of Zhang et al. (2024). This direct method would indeed result in an exact transformation where the optimal solution of the transformed game (MAG) would be identical to the optimal solution of the original BP problem. However, given the computational cost, we opted for the **binary search-based algorithm** described in **Theorem 3.7** of Zhang et al. (2024) instead.\\n\\n2. **Approximation Gap**: The binary search-based algorithm introduces an approximation, where the solution to the transformed game is within a **2$\\\\varepsilon$** gap of the optimal solution to the original BP game. We explicitly mention this approximation in **Proposition 1** of our paper. While this means the equilibrium in the transformed MAG is not exactly the same as in the original BP game, the small approximation gap is a reasonable trade-off for the reduction in computational complexity.\\n\\n3. **Reasonableness of the Approach**: Given the guarantees provided by **Theorem 3.7** in Zhang et al. 
(2024), we acknowledge that the transformed game\\u2019s equilibrium is not identical to the original BP game\\u2019s equilibrium due to the approximation. However, the 2$\\\\varepsilon$ gap is sufficiently small for practical purposes, and we believe this trade-off is justified in our context. We have also clearly stated this approximation in our paper.\\n\\n**Revision Plan**:\\n\\nIn the revised version of the paper, we will further clarify that we opted for the binary search-based algorithm from **Theorem 3.7** of Zhang et al. (2024) due to its computational efficiency, and we will emphasize the approximation gap of 2$\\\\varepsilon$ between the solutions of the transformed and original games. This will ensure that readers fully understand the implications of this approach.\\n\\nThank you again for your thoughtful question, and we hope this explanation resolves any concerns regarding our methodology.\"}",
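The binary-search scheme described in the Q16 answer can be sketched generically. Here `feasible(v)` abstracts the per-query zero-sum equilibrium computation of Zhang et al. (2024) into a black-box oracle; the function name and the toy oracle are the editor's illustration, not the paper's code:

```python
def binary_search_optimum(feasible, lo, hi, eps):
    """Largest attainable objective value, located to within eps.

    feasible(v): can some equilibrium give the mediator value >= v?
    Each query would correspond to one zero-sum game solve; an
    eps-accurate solve combined with the eps-wide final bracket is
    what yields a 2*eps-style gap of the kind mentioned above.
    """
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid  # value mid is attainable: search higher
        else:
            hi = mid  # not attainable: search lower
    return lo

# Toy oracle with true optimum 0.625: the search recovers it to within eps.
v = binary_search_optimum(lambda t: t <= 0.625, 0.0, 1.0, 1e-6)
```

The trade-off is visible in the structure: instead of one expensive exact Lagrangian solve, the cost is a logarithmic number of cheaper approximate solves, at the price of the stated gap.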
"{\"summary\": \"This paper focuses on the Bayesian persuasion problem, exploring its solution within a natural language framework. It introduces an interface for tackling Bayesian persuasion by integrating large language models (LLMs) with game-theoretic solvers. The authors empirically assess the effectiveness of the proposed method across three distinct settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper proposes a novel approach to solving Bayesian persuasion problems within natural language settings, providing a unified interface for game-theoretic solvers. The framework integrates several advanced techniques to effectively support a verbalized Bayesian persuasion model.\", \"weaknesses\": \"The paper does not clearly articulate the benefits of a verbalized Bayesian persuasion approach. The tasks discussed are highly simplified, which undermines the persuasive power of the work. In the method section, the description of the overall pipeline is vague, making it difficult to understand how the approach operates in detail. Additionally, as existing research has already explored Bayesian persuasion in natural language settings [1], such as applying Bayesian frameworks to enhance LLM performance in math and code generation, the contribution of the proposed method to the community appears limited.\\n\\n[1] Bai, Fengshuo, et al. \\\"Efficient Model-agnostic Alignment via Bayesian Persuasion.\\\" arXiv preprint arXiv:2405.18718 (2024).\", \"questions\": \"1. Could you provide a clearer explanation of the iterative process in your proposed method?\\n2. The \\u201clie\\u201d and \\u201chonest\\u201d probabilities in Figure 4 are somewhat confusing; could the authors offer a more detailed description?\\n3. Figure 7 discusses variations in prompts, but the information presented is not clearly explained, and the analysis feels vague. Could you elaborate further on this?\\n4. 
Can your proposed method be applied to broader, real-world scenarios or other potential applications? If so, could you briefly describe how it might be applied and any potential challenges?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We sincerely thank the reviewer for their continued engagement and thoughtful feedback throughout the review process. Below, we address the specific concerns you raised:\\n\\n**On the utility function of the mediator and sender** \\n\\nWe appreciate your observation regarding the relationship between the mediator\\u2019s utility function and the sender\\u2019s utility function. To clarify, in our work, the mediator is modeled as equivalent to the sender. This equivalence ensures that the mediator\\u2019s utility function is inherently aligned with the sender\\u2019s utility function. However, to apply the methodology proposed in Zhang et al. (2024), we reformulated the sender\\u2019s utility function. Specifically, we transitioned from the utility function defined in Equation (1) of the main text to the reformulated version presented as Equation (3) in the appendix. This transformation enables the application of Zhang et al.\\u2019s approach while preserving the fundamental equivalence between the sender\\u2019s and mediator\\u2019s utilities.\\n\\n---\\n\\n**On the interpretation of BP as a MAG**\\n\\nWe understand and appreciate your concern about the interpretation of the Bayesian Persuasion (BP) setting as a mediator-augmented game (MAG) and the benefits of this perspective. Our motivation for modeling BP as a MAG is to establish theoretical results regarding the convergence of solutions to BP problems within the VBP framework, and more specifically, using the PSRO (Policy Space Response Oracles) framework. To the best of our knowledge, existing theoretical results for PSRO do not directly apply to Bayesian Persuasion or other extensive-form games with imperfect information.\"}",
"{\"comment\": \"> **Q11: Strategic Cycles as an Alternative Explanation:** *L447: the authors suggested that the pattern of honesty rising, falling and then rising again validates the hypothesis that this is due to the use of aligned LLM. I would have thought that a simpler explanation is that the professor and the recruiter are simply in a strategic cycle? Would that not be a reasonable explanation to this phenomenon?*\\n\\nThank you for your insightful suggestion! We agree that the phenomenon of honesty rising, falling, and then rising again could indeed be explained by the presence of a **strategic cycle**, which aligns well with the characteristics of a bargaining game. This is a compelling alternative hypothesis and one that we had not fully explored in the original submission.\\n\\nIn response, we are conducting additional experiments to investigate the phenomenon further. Specifically, we are using an **unaligned LLaMA model** to see whether the same pattern of behavior (honesty oscillations) still occurs. This will help determine whether the pattern is due to using an aligned large language model (LLM), as we originally hypothesized, or whether it is more appropriately explained by strategic cycles in the interaction between the professor and the recruiter.\\n\\n**Revision Plan**:\\n\\nIn the revised version of the paper, we will include the results of these new experiments and discuss whether the pattern persists with an unaligned model. If strategic cycles are a more fitting explanation, we will update our discussion to reflect this alternative hypothesis.\\n\\nThank you again for your excellent suggestion, and we look forward to providing more detailed results in the revised paper.\\n\\n---\\n\\n> **Q12: Justification for Pruning to Top 10 Strategies:** *L483: '... at most the top 10 strategies with the highest probabilities', is this for computational reasons? 
Pruning actions by their support at a restricted game equilibrium could be problematic in general.*\\n\\nThank you for raising this important point! As you correctly noted, altering the support set of strategies can indeed impact the solution of game-theoretic problems. Our decision to prune the strategies to the top 10 was primarily motivated by the need to **reduce computational complexity**.\\n\\nHowever, it's important to clarify that the **Prompt-Space Response Oracle (PSRO)** leverages **OPRO** and **FunSearch** as the best response oracles. These oracles are not strict game solvers in the traditional sense but instead rely on the **innate human-like reasoning** embedded within large language models (LLMs) to approximate solutions. Given this nature, we initially hypothesized that reducing the number of prompts might not significantly affect the results, as the LLM's common-sense reasoning could compensate for the reduced strategy space.\\n\\nThat being said, we recognize that pruning strategies could still have an impact. To address this concern, we are currently conducting additional experiments where we vary the number of retained prompts to test whether this pruning affects the performance of the VBP framework in a significant way.\\n\\n**Revision Plan**:\\n\\nIn the revised version of the paper, we will include the results of these experiments and discuss whether the pruning of strategies to the top 10 has any adverse effects on the performance of VBP. 
If necessary, we will adjust our approach based on the findings to ensure we are not sacrificing solution quality for computational efficiency.\\n\\nThank you again for your insightful feedback, and we will ensure that this important point is addressed in the revised manuscript.\\n\\n---\\n\\n> **Q13: Redundancy in Figure 7:** *Column 'Converged prob' and the next column are redundant and could be consolidated to create space for larger fonts that are more readable?*\\n\\nThank you for your suggestion regarding the redundancy in Figure 7. We agree that the columns \\\"Converged prob\\\" and the next column can be consolidated as they present overlapping information.\\n\\nIn the revised version of the paper, we will remove the second-to-last column and enlarge the figure to improve readability. This will allow us to provide larger fonts and a clearer presentation of the data.\\n\\n---\\n\\n> **Q14: Clarification on Probabilities in Figure 7:** *How are the probabilities computed? Are they the average probability of taking over the pool of policies in the PSRO population?*\\n\\nThank you for your question regarding the probabilities in Figure 7. You are correct: the probabilities shown represent the **average probability of taking a strategy over the pool of policies in the PSRO population**.\\n\\nWe recognize that this was not clearly explained in the original submission, and we will add a detailed explanation of how these probabilities are computed in the revised version of the paper to ensure clarity.\\n\\nThank you again for your feedback, and we will make sure this clarification is included in the updated manuscript.\"}",
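The population-averaging described in the answer to Q14 can be made concrete with a minimal sketch. This is an illustrative sketch only, not the authors' implementation: it assumes the PSRO population is summarized by a meta-distribution over policies and a per-policy action-probability table, and the numbers below are hypothetical.

```python
import numpy as np

def population_avg_prob(meta_dist, policy_action_probs):
    """Average probability of each action under a PSRO population.

    meta_dist: weights over the policies in the population (sums to 1).
    policy_action_probs: one row per policy, giving that policy's
        probability of taking each action (rows sum to 1).
    """
    meta_dist = np.asarray(meta_dist, dtype=float)
    policy_action_probs = np.asarray(policy_action_probs, dtype=float)
    # Mixture of the per-policy action distributions, weighted by the
    # meta-distribution over the population.
    return meta_dist @ policy_action_probs

# Hypothetical population of three policies over two actions (honest, lie).
avg = population_avg_prob(
    [0.5, 0.3, 0.2],
    [[1.0, 0.0],
     [0.6, 0.4],
     [0.0, 1.0]],
)
print(avg)  # -> approximately [0.68, 0.32]
```

Under this reading, each bar in Figure 7 would be one entry of such a mixture over the policy pool.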
"{\"title\": \"Kind Reminder to Reviewers: Feedback on Rebuttal and Revised Manuscript\", \"comment\": \"We deeply appreciate the time and effort all reviewers have already dedicated to reviewing our work and providing constructive feedback. We kindly ask if you could take a moment to review our rebuttal and the updated manuscript, and consider whether any adjustments to your evaluation and scores might be appropriate based on our revisions.\"}",
"{\"comment\": \"> Q2: **Simplified Nature of the Tasks**: The tasks discussed are highly simplified, undermining the work's persuasive power. How do you justify the choice of these simplified tasks?\\n\\nThank you for your question regarding the simplified nature of the tasks we used in our experiments. We acknowledge that the tasks we chose\\u2014namely, the Recommendation Letter (REL) problem, the Courtroom (COR) problem, and the Law Enforcement (LAE) problem\\u2014may appear simplified at first glance. However, these problems have been widely studied in the Bayesian persuasion literature for many years and are considered canonical examples of strategic communication and decision-making under uncertainty.\\n\\nEach of these tasks captures essential elements of real-world scenarios where persuasion plays a critical role:\\n\\n- **Recommendation Letter (REL) Problem**: This problem models the strategic communication between a professor and a hiring committee, and while the student\\u2019s quality is simplified to a binary classification (weak or strong), the core dynamics of persuasion remain highly relevant. The REL problem has been extensively studied (Dughmi, 2017) and is a foundational example of Bayesian persuasion in academic and hiring contexts.\\n\\n- **Courtroom (COR) Problem**: This problem, originally formulated by Kamenica & Gentzkow (2011), models the interaction between a prosecutor and a judge, where the prosecutor selectively presents evidence to influence the judge\\u2019s decision. While we simplified the courtroom investigation procedures for the sake of LLM processing, selective evidence presentation is a well-established and important aspect of real-world legal systems.\\n\\n- **Law Enforcement (LAE) Problem**: The LAE problem (Kamenica, 2019) models how law enforcement agencies can signal their presence to influence drivers' speeding behavior. 
Although simplified, this problem captures the strategic element of signaling and persuasion in regulatory and enforcement settings.\\n\\nThese three problems, while simplified in some respects, are **general enough** to capture the fundamental dynamics of Bayesian persuasion and have been studied extensively in the literature. They provide a solid foundation for evaluating our proposed verbalized approach because they represent well-understood benchmarks that allow us to test and compare our method's effectiveness in a controlled manner. Furthermore, the simplicity of the tasks enables us to isolate the performance of our natural language-based approach without introducing unnecessary complexity that might obscure the core contributions of our work.\\n\\nAdditionally, even these three classic tasks, when considered in more complex settings such as **multistage Bayesian persuasion (S3)**, cannot yet be fully solved by our method. To the best of our knowledge, solving these types of problems in such complex settings remains an **open problem** in the field. This highlights that while the selected tasks are foundational, significant work remains to be done in scaling these methods to more complex, real-world applications.\\n\\nIn summary, we selected these tasks not because they are trivial but because they offer well-established, generalizable models for studying persuasion, and the community has validated them over many years. Solving these classic BP problems in a natural language domain is an important step toward applying more sophisticated persuasion techniques in real-world scenarios. Furthermore, addressing these problems in more complex settings remains an active area of research and an open challenge in the field.\"}",
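Because these problems are canonical, their single-stage versions admit closed-form optimal signaling schemes. As an illustration (not the paper's code), the sketch below computes the textbook optimum for the Recommendation Letter problem, assuming the standard payoffs where the committee hires iff the posterior probability of a strong candidate is at least 1/2; variable names are ours.

```python
def optimal_rel_scheme(prior_strong):
    """Closed-form optimal sender strategy in the REL problem.

    Assumes the standard textbook payoffs: the committee hires iff
    P(strong | signal) >= 1/2. Returns (q, hire_prob), where q is the
    probability of also recommending a weak student.
    """
    if prior_strong >= 0.5:
        # The committee already hires under the prior; always recommend.
        return 1.0, 1.0
    # Recommend every strong student, plus weak students with probability q
    # chosen so the posterior after a recommendation is exactly 1/2:
    #   prior / (prior + (1 - prior) * q) = 1/2  =>  q = prior / (1 - prior)
    q = prior_strong / (1.0 - prior_strong)
    hire_prob = prior_strong + (1.0 - prior_strong) * q  # = 2 * prior
    return q, hire_prob

q, hire = optimal_rel_scheme(0.3)
print(q, hire)  # q is about 0.4286; hire is about 0.6
```

Such closed-form baselines are exactly what makes these tasks useful benchmarks: a natural-language solver's output can be checked against a known optimum.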
"{\"comment\": \"Could the authors confirm that the utility function of the *mediator* is identical to the sender's utility function?\\n\\nI agree that in a strict technical sense a BP setting *could* be interpreted as a mediator-augmented game; however, it remains unclear why we should interpret BP as a MAG. What specific benefit do you derive by making this connection? \\n\\nIn Zhang et al. (2022, 2024) the mediator has a specific construction for its utility function, which allows for the selection of an optimal equilibrium. That is the key benefit of that line of works. Here I don't see this benefit playing out.\"}",
"{\"comment\": \"> **Q15:** My understanding of Zhang et al. (2022) in the context of BP is that you would construct a fictitious mediator player that plays against a team of deviator players (both sender and receiver). Reaching an equilibrium in the transformed game would then reveal an equilibrium (of a certain type) in the original BP game. Please clarify why and how in your VBP framework the sender can be the mediator and still benefit from results from Zhang et al. (2022).\\n\\nThank you for your timely and insightful question. We appreciate the opportunity to clarify our reasoning regarding the sender's role as a mediator in our Verbalized Bayesian Persuasion (VBP) framework and the applicability of the results from Zhang et al. (2022).\\n\\nIn our interpretation of **Mediator-Augmented Games (MAG)**, it is permissible to model a scenario where only one player is in the game, or more specifically, where one of the players in a two-player game is modeled as the **mediator**. Zhang et al. (2022) and (2024) support this interpretation in several parts of their work:\\n\\n1. In the **Application and Related Work** section of Zhang et al. (2022), they mention: \\n _\\u201cPersuasion in games [17, 3, 23, 14, 30]. The mediator (in that literature, usually the \\u2018sender\\u2019) has more information than the players (\\u2018receivers\\u2019) and wishes to tell information to the receivers so as to persuade them to act in a certain way.\\u201d_ \\n This aligns with our approach, where the sender plays the role of the mediator by having informational advantages and attempting to influence the receiver.\\n\\n2. In **Appendix F** of Zhang et al. 
(2022), under the section on **Automated Multi-Stage Bayesian Persuasion (Information Design)**, they state: \\n _\\u201cIn Bayesian persuasion, also commonly referred to as information design [17], the roles of the mediator and player are reversed compared to automated mechanism design: the mediator (\\u2018principal\\u2019) has informational advantage, and the player(s) take the actions.\\u201d_ \\n This further corroborates our use case, as the sender (mediator) influences the actions of the receiver (player).\\n\\n3. In **Definition 2.1** of Zhang et al. (2024), they specify that **n** (the number of players) can equal 1, indicating that it is possible to have a single player in the game, which supports the idea of modeling the sender as a mediator.\\n\\n4. Finally, in **Appendix B** of Zhang et al. (2024), they mention: \\n _\\u201cMoreover, in our formulation the mediator has the power to commit to a strategy. As such, our results also relate to the literature on learning and computing Stackelberg equilibria [8, 35, 66, 84, 20], as well as the work of Camara et al. [15], which casts mechanism design as a repeated interaction between a principal and an agent.\\u201d_ \\n This highlights that the mediator can commit to a strategy, which is crucial in our VBP framework, where the sender (mediator) commits to a signaling scheme to influence the receiver\\u2019s actions.\\n\\nThus, in our VBP framework, the sender can act as the mediator and still benefit from the theoretical results of Zhang et al. (2022), as the framework allows for such modeling where the sender, with informational advantages, influences the receiver (player). These references from Zhang et al. (2022) strongly support our approach, and we will clarify this aspect in the revised version of the paper.\\n\\n**Revision Plan**:\\n\\nIn the revised version, we will explicitly reference these parts of Zhang et al. 
(2022) and (2024) to make it clear why modeling the sender as the mediator is valid and how the results from Zhang et al. can still be applied in our VBP framework.\\n\\nThank you again for your valuable feedback.\"}",
"{\"summary\": \"The paper extends a classical Bayesian Persuasion (BP) framework by incorporating more realistic and complex interactions through natural language. The proposed Verbalized Bayesian Persuasion (VBP) framework builds upon various existing techniques and introduces a two-player game in which both the sender and receiver interact through large language models (LLMs). Signal optimization is achieved through prompt optimization using existing methods.\\n\\nThe framework is tested across three scenarios with incrementally complex settings (S1, S2, S3), utilizing Llama 3.1-8b as the LLM.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The addressed problem is interesting, and leveraging large language models (LLMs) to model and solve a persuasion problem using natural language appears promising.\", \"The paper is well-organized overall and effectively integrates several approaches and techniques to extend the Bayesian Persuasion (BP) framework into a more realistic and complex scenario.\"], \"weaknesses\": [\"The optimization of the LLM prompts is not sufficiently detailed, particularly regarding the categories and content used in the prompt (see Q1).\", \"An anonymized repository containing the code and data for reproducibility is missing, although the authors provide guidelines and reference an existing repository.\", \"**Minor Comments:**\", \"Typo: \\\"Inforset\\\" should be \\\"Infoset\\\" I guess,\", \"In Section 2.3, PSRO is used to refer to two different concepts.\"], \"questions\": \"**Q1:** Are the categories and content of the key prompts exhaustively presented in Figure 7? For instance, regarding the writing style and the category \\\"Tone,\\\" is the content \\\"Positive\\\" fixed?\\n\\n**Q2:** Is it trivial that the chart (d) shows the Honest probability as always 1.0? 
Under what circumstances would a sender have an incentive to lie about a strong candidate?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"> **Q5: Mixed-Motive Games and PSRO Convergence:** *L118-L130: PSRO in the limit converges to a Nash equilibrium out of many. In mixed-motive games, NE needs not be unique and solutions are not interchangeable. Perhaps this could be a relevant point of discussion especially since VBP seems to be primarily dealing with mixed-motive games?*\\n\\nThank you for your thoughtful comments on the uniqueness of Nash equilibria in mixed-motive games or Bayesian correlated equilibria in Bayesian persuasion. We fully agree with your observation that equilibria are often not unique in such games, and different equilibria can lead to distinct outcomes that are not interchangeable. This is indeed an important point in the theoretical understanding of mixed-motive games.\\n\\nHowever, we would like to clarify that the primary focus of our work is not on addressing the issue of non-unique or non-interchangeable equilibria. Instead, our emphasis is on evaluating the **effectiveness** and **generality** of the proposed Verbalized Bayesian Persuasion (VBP) framework. Specifically:\\n\\n1. **Effectiveness of VBP:** We are primarily concerned with whether VBP can effectively solve Bayesian persuasion problems. To demonstrate this, we compare the performance of VBP against classic BP solvers. Our results highlight that VBP can reliably solve these problems.\\n\\n2. **Generality of VBP:** Another key focus is the generality of the VBP framework\\u2014whether it can handle more complex scenarios, such as the problems in S2 and S3 settings. These settings extend beyond the traditional BP framework and introduce additional challenges not typically addressed by classic solvers. Our results show that VBP can solve these more intricate problems, further validating its applicability.\\n\\nWe recognize that the issue of non-unique and non-interchangeable equilibria is relevant and could impact the broader understanding of mixed-motive games or Bayesian persuasion. 
However, in this work, our primary goal was to propose a framework that is both effective and general in its ability to solve real-world Bayesian persuasion problems. While important, addressing the non-uniqueness and interchangeability of equilibria was not the central focus of our study.\\n\\n**Future Work**:\\n\\nWe have carefully considered the reviewer\\u2019s suggestion and agree that exploring the non-uniqueness of equilibria could offer valuable insights. Specifically, **we plan to incorporate the Price of Anarchy (PoA) as an optimization objective** in future iterations of the VBP framework. By introducing PoA, we aim to quantify the efficiency loss caused by selecting suboptimal equilibria, thereby guiding the framework toward equilibria that minimize this inefficiency. This would allow us to better understand the trade-offs between different equilibria in mixed-motive games and improve the solution quality of VBP when multiple equilibria exist.\\n\\nBy adding PoA as an explicit optimization objective, we can move beyond simply finding any equilibrium and instead focus on equilibria that are optimal in terms of both efficiency and strategic outcomes. This enhancement directly addresses the issue raised by the reviewer and reflects our commitment to further refining the VBP framework based on this valuable feedback.\\n\\nWe appreciate the reviewer\\u2019s thoughtful comment and will ensure that this aspect is a key focus in future extensions of our work.\"}",
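The quantity the authors propose to optimize is standard. As a minimal, purely illustrative sketch (hypothetical numbers, welfare-maximization convention), the Price of Anarchy compares the best achievable welfare with the welfare of the worst equilibrium:

```python
def price_of_anarchy(equilibrium_welfares, optimal_welfare):
    """Efficiency loss from equilibrium selection (welfare maximization):
    ratio of the best achievable welfare to the welfare of the worst
    equilibrium. A value of 1 means no loss; larger values are worse."""
    return optimal_welfare / min(equilibrium_welfares)

# Hypothetical welfares of three equilibria versus the social optimum.
poa = price_of_anarchy([4.0, 6.0, 8.0], 10.0)
print(poa)  # -> 2.5
```

Minimizing this ratio (driving it toward 1) is one way to steer a solver toward the more efficient equilibria among many.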
"{\"comment\": \"> **Q4: Rephrasing of State Transition Influence:** *L46: 'Notably, the designer must ... that influence state transition', this is difficult to follow. Perhaps worth rephrasing?*\\n\\nThank you for pointing out the difficulty in understanding the phrasing around the influence on state transitions. We agree that this part could benefit from rephrasing for clarity, and we appreciate the opportunity to explain the underlying concept more clearly.\\n\\nIn the context of **multi-agent reinforcement learning (MARL)**, there are two main approaches for influencing the behavior of agents: **mechanism design** and **information design**.\\n\\n1. **Mechanism Design:** This approach primarily works by modifying the reward functions of agents. By changing the rewards, the designer indirectly influences the agents' future behaviors by pushing them to optimize their strategies differently. However, the effect of modifying the reward function is not immediate\\u2014agents need to optimize their strategy based on the new reward structure, and the changes will only be reflected in the subsequent sampling of actions in the next episode or round. This tends to reduce the complexity of the problem since the impact on state transitions is indirect and delayed.\\n2. **Information Design:** On the other hand, information design involves modifying the observation functions of the agents, which directly affects the actions they take in the current episode. Since the state transition function in MARL depends on the current state and the actions taken by agents, altering what agents observe can have an immediate and more direct effect on the state transitions within the same episode. 
This introduces more uncertainty and complexity, as the altered observations influence the agents' subsequent actions in real time.\\n\\nThus, the distinction we aimed to make is that **information design** has a more immediate and direct impact on state transitions due to its influence on actions within the same episode. In contrast, **mechanism design** has a more delayed and indirect effect, as it only impacts actions after agents have optimized their strategies in response to the modified rewards.\\n\\n**Revision Plan**:\\n\\nIn the revised version of the paper, we will reorganize this explanation to make it clearer. Specifically, we will highlight how **mechanism design** and **information design** operate differently in their ability to influence state transitions, emphasizing the immediacy of their effects on agent behavior. We will also aim to simplify the language to ensure the explanation is easy to follow.\\n\\nWe appreciate the reviewer bringing this to our attention and will ensure that the revised text clarifies this distinction more effectively.\"}",
"{\"comment\": \"> Q5: **Explanation of Probabilities in Figure 4**: The \\u201clie\\u201d and \\u201chonest\\u201d probabilities in Figure 4 are somewhat confusing; could you offer a more detailed description?\\n\\nThank you for your question regarding the probabilities of \\\"lie\\\" and \\\"honest\\\" in Figure 4. We understand that this aspect of the figure may have been confusing, and we appreciate the opportunity to clarify.\\n\\nIn the context of the **three classic BP problems** (Recommendation Letter, Courtroom, and Law Enforcement), the \\\"lie\\\" and \\\"honest\\\" probabilities refer to the likelihood of the sender (information designer) providing an accurate or deceptive signal to the receiver. Here's a more detailed breakdown:\\n\\n- **Lie Probability**: This represents the probability that the sender chooses to **misrepresent** the true state of the environment. For example:\\n - In the **Recommendation Letter (REL) problem**, this would mean the professor describes a **weak** student as **strong**.\\n - In the **Courtroom (COR) problem**, the prosecutor describes an **innocent** defendant as **guilty**.\\n - In the **Law Enforcement (LAE) problem**, the police signal that an **unpatrolled** road segment is **patrolled**.\\n\\n- **Honest Probability**: This is the probability that the sender provides an **accurate** description of the environment. 
For example:\\n - In the **REL problem**, the professor accurately describes a **strong** student.\\n - In the **COR problem**, the prosecutor accurately describes a **guilty** defendant.\\n - In the **LAE problem**, the police signal correctly whether a segment of the road is **patrolled**.\\n\\nThese probabilities are determined based on the sender's strategy in the Bayesian persuasion framework, and they help quantify how often the sender is truthful versus deceptive in each scenario.\\n\\n**Estimation of Probabilities:**\\nThe probabilities of lying and honesty in Figure 4 are **empirically estimated** through simulations. Specifically, we use **20 random seed samplings** to generate a distribution of outcomes, which allows us to calculate the average lie and honesty probabilities across multiple runs. This sampling-based approach ensures that the estimates are robust and not overly sensitive to a single trial or random fluctuation.\\n\\n**Summary:**\\nIn short, the \\\"lie\\\" and \\\"honest\\\" probabilities reflect the sender's behavior regarding truthfulness or deception in the three BP scenarios. The probabilities are estimated based on repeated simulations (20 random seeds), which accurately measure how often the sender chooses to lie or be honest under different conditions in each scenario. We hope this clarification helps, and we can update the paper to make this explanation clearer in the revised version.\"}",
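The seed-averaged estimation described in this answer can be sketched as follows. This is an illustrative stand-in, not the authors' simulation code: `run_episode` is a hypothetical callback for one VBP rollout that reports whether the sender's signal matched the true state.

```python
import random

def estimate_honesty(run_episode, num_seeds=20):
    """Estimate the sender's honest/lie probabilities by averaging over
    independently seeded rollouts, mirroring the 20-seed procedure.

    run_episode(rng) stands in for one VBP rollout; it should return
    True when the sender's signal matched the true state.
    """
    honest = sum(run_episode(random.Random(seed)) for seed in range(num_seeds))
    p_honest = honest / num_seeds
    return p_honest, 1.0 - p_honest

# Hypothetical rollout in which the sender is honest about 70% of the time.
p_honest, p_lie = estimate_honesty(lambda rng: rng.random() < 0.7)
```

Using a fresh `random.Random(seed)` per run keeps the rollouts independent and the whole estimate reproducible.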
"{\"comment\": \"> **Q11: Clarification on typographical error in Line 312** *\\\"Line 312 typo 'Either a limit on the allowable tree depth' ... missing an or?\\\"*\\n\\nThank you for pointing out the typographical error on **Line 312**. This was an oversight on our part, and we will correct it in the **revised version** of the paper by adding the missing **\\\"or\\\"**.\\n\\nWe appreciate your attention to detail and will ensure that this is addressed in the updated manuscript. Thank you again for your careful review!\\n\\n---\\n\\n> **Q12: Clarification on typographical error in Line 320** *\\\"Line 320 typo/grammar 'through prompt design or expand the receiver's inforset.'\\\"*\\n\\nThank you for pointing out the typographical error on **Line 320** regarding the term \\\"inforset.\\\" This was an oversight, and we will correct it to **\\\"infoset\\\"** in the revised version of the paper.\\n\\nWe appreciate your attention to this detail and will ensure the correction is made in the updated manuscript. Thank you again for your careful review!\\n\\n---\\n\\n> **Q13: Clarification on terminology consistency regarding LLMs** *\\\"Line 392 'since we use aligned LLMs'---previously the paper talks a lot about 'pretrained' LLMs, which could be interpreted as saying these are base models rather than chat/alignment-finetuned LLMs. It might be worth replacing the 'pretrained' terminology.\\\"*\\n\\nThank you for your insightful suggestion regarding the terminology used for LLMs in the paper. 
We agree with your point that the term **\\u201cpretrained\\u201d** might be interpreted as referring to base models rather than models that have undergone further alignment fine-tuning.\\n\\nIn response to this, we will update the terminology to **\\u201cpretrained and aligned LLMs\\u201d** in the **revised version** of the paper to ensure consistency and clarity.\\n\\nWe appreciate your attention to this, and we are confident that this change will improve the precision of the terminology. Thank you again for your helpful feedback!\\n\\n---\\n\\n> **Q14: Clarification on the most important takeaway of the paper** *\\\"What would you say is the most important takeaway/learning from the paper that would be interesting and useful to the community?\\\"*\\n\\nThank you for your thoughtful question regarding the most important takeaway of the paper. We believe our work provides two significant contributions to the community:\\n\\n1. **VBP Framework for Real-World Bayesian Persuasion Problems**\\n First, our **Verbalized Bayesian Persuasion (VBP)** framework enables the study of a wide variety of real-world Bayesian persuasion (BP) problems. By simply inputting different prompts to the large language model (LLM), we can specify diverse scenarios that involve different human roles, personalities, and contexts. Moreover, the game solver provided by VBP ensures a solution with **convergence guarantees**, offering a systematic approach to finding high-quality solutions for complex BP problems.\\n2. **Iterated Setting (S3) Insights**\\n While the iterated setting (**S3**) provides interesting insights, we acknowledge that this aspect remains more speculative and opens up potential avenues for future research. The results suggest that in practical BP problems, the receiver might have more flexibility than previously assumed in classical BP models. 
This observation could point towards a richer interaction model, but further investigation is required to fully understand its implications. We chose not to explore this in-depth in the current paper, as it slightly deviates from our core focus.\\n\\nIn summary, the main takeaway from the paper is the **flexibility and effectiveness** of the VBP framework in addressing real-world BP problems, along with some **preliminary insights** from the iterated setting that invite further exploration. Thank you again for your question, and we hope this clarifies the key contributions of our work.\"}",
"{\"comment\": \"> **Q2: Critical Elements of the VBP Framework:** *Regarding b), there are, several methods that have been described here and it's not clear which ones are critical elements of the VBP framework. Among these, PSRO provides convergence guarantee (in a specific sense), yet the writing and Figure 3 would suggest that the convergence guarantee comes from the mediator-augmented game formulation. Overall, I would have appreciated a more succinct description of the framework with its necessary components instead of a juxtaposition of several rather sophisticated methods whose necessities in the framework remain unclear.*\\n\\nThank you for your insightful comments on the framework's clarity and the critical components of the VBP methodology. We want to address the concerns regarding the convergence guarantees and the role of the various algorithms in the framework.\\n\\n1. **Convergence Guarantees and Role of MAG:** Mediator-augmented games (MAG) serve as a game definition framework but do not inherently provide convergence guarantees. To solve VBP, we incorporate the binary search-based algorithm proposed by Zhang et al. (2024), specifically their Algorithm 1. This algorithm has been proven to converge to a Bayes-correlated equilibrium. It is important to note that this algorithm functions as a template, requiring a game solver as a key component. In our work, we instantiate the game solver as a variant of PSRO, referred to as the Prompt-Space Response Oracle. Overall, the theoretical results in Proposition 1 of our paper are built upon the theoretical results of Zhang et al. (2024) and the binary search-based algorithm they proposed.\\n2. **Clarification of Framework Components:** We acknowledge that the presentation in the paper may have caused some confusion, and we apologize for any lack of clarity. The key components of our framework are not overly complex, but the structure could have been more clearly laid out. 
Specifically, we model the verbalized BP problem as a MAG and then solve it using a prompt-space response oracle framework. The core of this framework is the selection of the best response oracle. For settings S1 and S2, we utilize the OPRO algorithm as the oracle, while for S3, we employ FunSearch. The introduction of FunSearch is necessary due to the multi-stage nature of S3, which requires more complex, history-dependent prompts. In this case, we generate conditional prompt functions using large language models (LLMs) and apply them to concrete historical information to generate the appropriate prompt.\\n\\nWe hope this explanation clarifies the structure of the VBP framework and the necessity of the methods included.\"}",
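The template-plus-solver structure described in this answer can be sketched schematically. This is an illustrative sketch, not the authors' or Zhang et al.'s actual algorithm: `feasible_at` is a hypothetical stand-in for one full run of the inner game solver (in VBP, the prompt-space response oracle), reporting whether it can certify an equilibrium reaching a target sender utility.

```python
def binary_search_equilibrium(feasible_at, lo=0.0, hi=1.0, tol=1e-4):
    """Schematic binary-search template: find the highest sender
    (mediator) utility for which the inner game solver certifies an
    equilibrium, to within tolerance tol.

    feasible_at(u) stands in for one call to the inner solver:
    True iff an equilibrium with sender utility >= u exists.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible_at(mid):
            lo = mid  # an equilibrium this good exists; raise the target
        else:
            hi = mid  # infeasible; lower the target
    return lo

# Toy feasibility oracle: equilibria exist up to sender utility 0.62.
best = binary_search_equilibrium(lambda u: u <= 0.62)
print(round(best, 3))  # -> 0.62
```

The point of the sketch is the division of labor: the outer loop only narrows a utility target, and all game-specific work lives inside the oracle it queries.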
"{\"comment\": \"> **Q3: Clarification on the complexity of the BP game and whether it\\u2019s the right testbed** *\\\"It seems that in the end the way the game is setup, it doesn't really matter, for instance, whether the rec letters are actually written eloquently or not. I might be missing something, but it feels like somehow the simple BP games are not really the right testing ground for studying LLM persuasion, because from a game theory perspective, neither the sender nor receiver gain anything by using more than a binary signal/policy.\\\"*\\n\\nThank you for raising the concern regarding whether the classical Bayesian persuasion (BP) game setup is the right testbed for studying persuasion with LLMs. From a game theory perspective, we acknowledge your point that the sender and receiver might not gain much from going beyond a binary signal/policy in the classic BP framework. However, we would like to clarify the rationale behind our choice of classical BP games and how our work extends beyond the limitations of this idealized scenario.\\n\\n1. **Classical BP as a Baseline for Validating the Approach**\\n Our work uses classical BP problems as a **first step** toward solving real-world persuasion problems using LLMs. The primary goal here is to demonstrate the **effectiveness of the algorithm** in a structured and well-understood environment. By starting with classical BP problems, we can benchmark our methods against known optimal solvers from the game theory literature. This allows us to validate the correctness and performance of our approach in a controlled setting before extending it to more complex and realistic scenarios.\\n\\n2. 
**Moving Beyond Idealized BP Games**\\n While we agree that classical BP games may simplify the interaction to binary signals or policies, **real-world persuasion** involves far more complexity due to factors such as:\\n\\n - **Ambiguity, implicit meaning, and vagueness** in natural language.\\n - **Human bounded rationality**, which means that real-world decisions are not always made based on perfectly rational or optimal strategies.\\n\\n Our work, particularly by introducing **VBP (Verbalized Bayesian Persuasion)**, aims to address these complexities by leveraging LLMs. The ultimate goal of VBP is to explore whether LLMs can handle real-world persuasion tasks that deviate from the idealized assumptions of classical BP games. With their natural language capabilities, LLMs are uniquely positioned to navigate these \\\"non-ideal\\\" circumstances where communication goes beyond binary signals to involve nuanced persuasion strategies.\\n\\n3. **Real-World Applications of LLM-Based Persuasion**\\n To better illustrate the relevance of LLMs in persuasion tasks, consider real-world applications such as **live-streaming e-commerce** or **conversational recommendation systems**. In these scenarios, LLMs (e.g., digital sales agents) replace human salespeople to persuade customers to purchase products. These interactions are rich in language\\u2014containing ambiguity, persuasion strategies, and implicit suggestions\\u2014which cannot be captured by simple binary policies. Using LLMs in such tasks demonstrates the importance of moving beyond classical BP games to study more complex forms of persuasion in realistic settings.\\n\\n For more details on real-world applicability, we refer to our response to Reviewer gJz3's **Q7: Real-world Applicability**, which outlines further examples of how LLMs might be applied in practical persuasion scenarios.\\n\\n4. 
**Future Directions**\\n While our current study demonstrates the feasibility of applying LLMs to classical BP problems, we acknowledge this is just a first step. Our future work will focus on adapting these methods to more realistic persuasion problems where natural language is critical, and the sender and receiver may engage in more complex, multi-turn interactions.\\n\\nThank you for your insightful comments, and we hope this clarifies the purpose and scope of our study.\"}",
"{\"comment\": \"> Q3: **Vagueness in Method Description**: The description of the overall pipeline in the method section is vague. Can you provide a more detailed explanation of how your approach operates, particularly clarifying the specifics of the pipeline?\\n\\nThank you for your question regarding the vagueness in the method section. We will provide a more detailed explanation of the overall pipeline, based on the description in Figure 2 of our paper, and clarify how our approach operates.\\n\\n1. **Sampling Process (from a Reinforcement Learning perspective)**\\n\\nThe pipeline operates as follows, with terminology and structure drawn from reinforcement learning (RL):\\n\\n- **Sender's Signal Generation**: As depicted on the left side of Figure 2, the **sender** (represented by a pre-trained large language model, or LLM) first determines its **signaling scheme**, which is effectively an optimized prompt. This prompt is designed to communicate with the receiver.\\n\\n- **Observation and Signal Transmission**: After observing the true state of the environment, the sender generates a signal based on its signaling scheme and sends this signal to the receiver. In our setup, this signal is produced as a natural language response from the LLM, shaped by the sender's prompt.\\n\\n- **Receiver's Decision**: The **receiver** (also a pre-trained LLM) receives this signal and the sender's signaling scheme. The receiver then makes a decision based on both the signal and the signaling scheme. The receiver\\u2019s decision is also generated through an LLM prompt, which contains its optimized portion and the input from the sender (i.e., the signal and the signaling scheme).\\n\\n- **Calculation of Rewards**: After the receiver makes its decision, the environment computes the rewards for both the sender and the receiver. This feedback is critical for optimizing their strategies.\\n\\n2. 
**Optimization of Sender and Receiver Strategies**\\n\\nWe illustrate the strategy optimization process on the right side of Figure 2. This framework is largely based on the **Policy Space Response Oracle** architecture but with several key differences:\\n\\n- **Strategy as Prompt Optimization**: In our approach, the sender and receiver strategies are encoded as prompts fed into the LLMs. Therefore, the process of optimizing their strategy is transformed into **prompt optimization**. Instead of optimizing traditional policies or strategies as in RL, we focus on fine-tuning the prompts given to the LLM.\\n\\n- **Replacement of Best Response Oracle**: In the Policy Space Response Oracle framework, the best response oracle is typically implemented using gradient-based reinforcement learning methods. Our approach replaces this with optimization algorithms tailored for large language models, such as **OPRO** or **FunSearch**. These methods focus on optimizing the prompts to improve the sender and receiver's strategies through language model interactions rather than gradient-based policy optimization.\\n\\n- **Meta-Game Simulation**: The sampling process within the meta-game simulation is adapted to the natural language framework. The sampling now follows the abovementioned process, where sender and receiver prompt interactions are simulated to gather data for strategy evaluation and optimization.\\n\\nThe remaining parts of the pipeline align with the standard PSRO framework, including using a **meta-strategy solver** to identify optimal strategies based on the sampled data.\\n\\n**Additional Clarifications**\\n\\nWe acknowledge that the original explanation in the paper may have been too high-level, and we will include a more detailed breakdown of the process in the revised version. 
To further aid understanding, we will also provide pseudocode that clearly illustrates the steps involved in the sampling and optimization processes.\\n\\nIn summary, our pipeline transforms traditional game-theoretic strategies into prompt-based strategies for LLMs. This approach allows us to adapt the powerful Policy Space Response Oracle framework to the natural language domain, where sender and receiver strategies are defined as optimized prompts, and best response oracles and reward calculations are handled using LLMs rather than traditional RL methods. We hope this clarifies the specifics of our method.\"}",
"{\"comment\": \"> Q4: **Distinguishing from Existing Research**: Existing research has already explored Bayesian persuasion in natural language settings. How does your approach differ from or improve upon existing methods, such as the work cited by Bai et al. (2024)?\\n\\nThank you for your question regarding how our approach differs from or improves upon existing work, such as the study by Bai et al. (2024). We want to clarify that the two works fundamentally differ in their goals, methods, and applications despite both leveraging the concept of Bayesian persuasion (BP) in some form.\\n\\n**Key Differences:**\\n\\n1. **Problem Focus**:\\n - **Our Work**: Our paper focuses on advancing the **Bayesian persuasion (BP) framework itself** by integrating it into natural language settings. We propose a **verbalized BP (VBP) framework** that extends classic BP to real-world scenarios involving human dialogues. Our primary goal is to solve BP problems in contexts where communication and persuasion occur through natural language, which is a major departure from traditional BP models that rely on simplified, scalar, or vector-based information structures.\\n - **Bai et al. (2024)**: Bai et al., on the other hand, use BP as a **tool for model alignment**. Their work leverages a form of classic BP (non-verbalized) to optimize the alignment of large language models (LLMs) with human intent. They formalize the alignment problem as an optimization of the signaling strategy from a smaller model (Advisor) to improve the responses of a larger model (Receiver). Their focus is on improving model performance in downstream tasks (e.g., mathematical reasoning, code generation) using BP within the context of model alignment.\\n\\n2. **Nature of BP Problem**:\\n - **Our Work**: We address the BP problem itself, particularly how it can be applied in **natural language settings**. 
Our framework involves real-world dialogue situations where the information designer (mediator) and the receiver are instantiated by LLMs, and strategic communication happens via **natural language** rather than abstract signals. This is the first attempt to extend BP into complex verbal communication scenarios that are more representative of real-world interactions.\\n - **Bai et al. (2024)**: Bai et al. still operate within the realm of **classic, non-verbalized BP**. Their work focuses on optimizing a signaling strategy to improve downstream task performance. Still, the communication between the Advisor (small model) and the Receiver (large model) is not in the form of natural language persuasion. Instead, it involves manipulating information in a structured way to enhance model responses.\\n\\n3. **Methodology**:\\n - **Our Work**: We propose a novel method to solve BP in natural language by transforming agents' policy optimization into **prompt optimization**. We introduce a generalized equilibrium-finding algorithm with a convergence guarantee to solve the BP problem within the language space. This allows us to address more complex, multistage BP scenarios that traditional methods cannot handle.\\n - **Bai et al. (2024)**: Bai et al. use BP as a framework to align models, relying on a **model-agnostic Bayesian persuasion alignment** approach. They optimize signals sent from a smaller model to a larger model, improving performance across tasks such as mathematical reasoning and code generation. Their focus is on **efficiency** in model alignment rather than solving BP problems in real-world dialogue settings.\\n\\n**Summary:**\\n\\nWhile both works touch on Bayesian persuasion, our approach is fundamentally different from Bai et al. (2024) in several ways. We focus on extending and solving the BP problem **itself**, specifically in **natural language settings**. In contrast, Bai et al. 
use classic BP as a **tool** for improving model alignment in downstream tasks. Our work contributes to the field by developing a verbalized BP framework for real-world, dialogue-based applications, whereas Bai et al. aim to enhance model performance through BP-driven alignment strategies in structured tasks like math and code generation.\\n\\nTherefore, our work addresses a completely different problem space and offers novel contributions to the study and application of Bayesian persuasion. We will add a discussion of Bai et al. (2024) to the revised version.\"}",
"{\"metareview\": \"This paper proposes a verbalized Bayesian persuasion framework using LLMs to model strategic communication in natural language settings, introducing techniques for optimizing prompts through game-theoretic approaches. Despite some promising empirical results, the majority of reviewers recommended rejection due to unclear novelty, unclear differentiation from existing work, and inadequate justification of the framework's necessity.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raise concerns about the work's model formulation, theoretical analysis, and practical value. Specifically, PcFV questioned the mapping to mediator-augmented games and how convergence guarantees transfer, while gJz3 and tm2K found the tasks overly simplified and questioned the framework's practical applications. Though the authors provided extensive responses and revised several sections (including reorganizing content and adding new discussions on real-world applications), they did not fully address the fundamental concerns about theoretical foundations and the practical necessity of their complex framework.\"}",
"{\"comment\": \"I thank the authors for their answers\"}",
"{\"comment\": \"> **Q10: Request for more analysis on S3, the iterated setting, and clarification of Figure 12** *\\\"It might be that the most interesting result is S3, the iterated setting. However, the paper doesn't focus that much on it, and I think it would require more analysis to draw more interesting conclusions from this. Figure 12 might be useful here but from eyeballing it I don't really follow how it supports the hypothesis discussed in lines 473-476 in Section 4.2. (As a side note, I think Figure 12 would benefit from additional titles for the different settings. It's not easy to see graphically that these are for two difference settings, with two of the plots sharing the same subtitles.)\\\"*\\n\\nThank you for bringing up this important point regarding the results from **S3 (the iterated setting)** and the need for additional analysis. We agree that S3 presents some of the most interesting dynamics in our study, particularly in how it reveals deeper bargaining interactions between the sender and receiver. However, as these results open up new research directions, we have intentionally kept the analysis in this paper somewhat limited, with plans to explore it further in future work.\\n\\n1. **Iterated Setting and Bargaining Dynamics**\\n As you pointed out, in classical persuasion theory, one of the key assumptions is that the sender must commit to a signaling strategy upfront and follow through with it during the interaction. The justification for this commitment, particularly in **long-term interactions**, is that the sender has an incentive to maintain their **reputation**, ensuring that the receiver continues to trust them.\\n\\n Classical analyses suggest that, given the sender\\u2019s commitment, the receiver has no incentive to deviate from following the strategy, since doing so would harm their own expected utility. The receiver, therefore, accepts the expected payoff associated with the sender's signal.\\n\\n2. 
**New Insight from S3**\\n However, the results from **S3** suggest a more complex dynamic. In this iterated setting, we observe that the receiver can **choose to ignore the sender\\u2019s signals**, effectively rendering the sender\\u2019s commitment meaningless. This means that the sender\\u2019s commitment must be **accepted by the receiver** for it to hold. If the receiver disagrees with the sender\\u2019s proposed strategy, they can force both parties into a **mutually worse outcome** by disregarding the signals entirely.\\n\\n This observation leads to a new hypothesis: in the **VBP framework**, Bayesian persuasion may be **equivalent to a bargaining game**. In such a game, the sender\\u2019s commitment is no longer unilateral. Instead, both parties must reach an agreement on the signaling strategy, or the interaction will lead to suboptimal outcomes for both.\\n\\n We acknowledge that this hypothesis deviates from the primary focus of the paper, which is why we did not delve deeper into it in the current work. However, this insight opens up an exciting avenue of research that we hope to explore in future studies.\\n\\n3. **Clarification of Figure 12**\\n Regarding **Figure 12**, we appreciate your feedback about its presentation. The figure is indeed intended to illustrate the dynamics of two different settings, and we agree that it would benefit from clearer titles and labels to distinguish these settings more effectively. We will revise the figure in the **updated version** of the paper to:\\n\\n - Include **clearer titles** for each plot, indicating the specific settings being compared.\\n - Ensure that the graphical differences between the settings are more apparent.\\n\\nIn summary, while we acknowledge that the S3 results are highly interesting and open up new research possibilities, we chose to limit our discussion of them in the current paper to stay focused on the primary contributions. 
We agree with your suggestion that **Figure 12** should be clarified and will make the necessary revisions to ensure it better supports the discussion of the iterated setting. Thank you again for your valuable feedback, and we look forward to exploring these ideas further in future work.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their insightful feedback and thoughtful questions. We greatly appreciate the opportunity to clarify our work and provide further details regarding the methodology and its implications. In the following sections, we will address each specific question raised by the reviewer, offering detailed explanations and elaborating on the key aspects of our approach.\\n\\nAdditionally, we will make minor corrections, such as fixing the identified typo and clarifying the use of terminology (e.g., PSRO) in the final version. Furthermore, we understand the importance of reproducibility and have prepared an anonymized version of the code and data repository, which will be made publicly available upon the paper's acceptance to ensure full replicability of our results.\\n\\n---\\n\\n> Q1: **Clarification on Figure 7 (Prompt Categories)**: Are the categories and content of the key prompts exhaustively presented in Figure 7? For instance, regarding the \\\"Tone\\\" category, is the \\\"Positive\\\" content fixed or variable?\\n\\nWe appreciate the question and want to clarify the information in Figure 7. The figure does not fully display all possible prompts used in the optimization. Instead, it shows a subset of the top 10 categories with the highest selection probabilities from the strategy (or prompt) pool, along with the most probable content under each category.\\n\\nWe adopt a **hierarchical optimization strategy** using the OPRO algorithm during the prompt optimization phase. This process first optimizes the categories, and afterward, the content within each category is optimized. When the sender or receiver ultimately uses the prompt, the category is probabilistically sampled from the policy pool, and within that category, the content with the highest probability is selected. 
This method allows us to maintain a balance between prompt diversity and computational tractability, ensuring that the prompts used in the final execution are both optimized and diverse.\\n\\nTo address the specific question about the \\\"Tone\\\" category, the \\\"Positive\\\" content is not fixed during the optimization process. After the optimization, it is selected as the most probable content within that category. We hope this clarifies the hierarchical nature of the prompt optimization process and the reasoning behind the selection shown in Figure 7.\\n\\n--- \\n\\n> Q2: **Honest Probability in Chart (d)**: In the chart (d), the \\\"Honest\\\" probability is consistently shown as 1.0. Could you clarify why this is the case, and under what circumstances would a sender be incentivized to lie, especially when discussing a strong candidate?\\n\\n\\n\\nThank you for this insightful question. We want to clarify why the \\\"Honest\\\" probability is consistently shown as 1.0 in the chart (d) and explain the sender's incentives in different circumstances.\\n\\nIn the Bayesian Persuasion (BP) context, it is intuitive for the sender to report high-quality states to the receiver honestly. For instance, in a recommendation letter scenario, the sender (the letter writer) aims to maximize the probability that the student gets accepted. Therefore, the sender has no incentive to misrepresent a high-quality student as a low-quality one, as doing so would reduce the student\\u2019s chances of being accepted, which contradicts the sender's objective.\\n\\nThe more complex aspect of the BP problem lies in how the sender handles low-quality states. The sender\\u2019s key decision is determining the probability of describing a low-quality state as high-quality. This is because, by misrepresenting low-quality candidates, the sender may gain a net benefit. 
However, if the probability of lying becomes too high, the receiver may start to ignore the sender's information altogether, reducing the sender\\u2019s overall payoff.\\n\\nTo maximize their own benefit, the sender typically converges to an equilibrium where they lie with a certain probability, but not excessively, to maintain credibility with the receiver. In the case of high-quality states (as shown in chart (d)), the sender always tells the truth, as there is no incentive to misrepresent a strong candidate.\\n\\nThus, the reason the \\\"Honest\\\" probability is consistently 1.0 in the chart (d) is that, in high-quality states, the sender has no motive to lie\\u2014honesty is aligned with their goal of maximizing the outcome for the strong candidate.\\n\\nWe hope this explanation clarifies the situation depicted in chart (d) and the sender's incentives in the BP framework.\"}",
"{\"comment\": \"> Section 3.4, Table 1, and Appendix F of Zhang et al. (2022)\\n\\nTable 1 does not suggest that one of the players would take on the role of the mediator, but your framework does. \\n\\nMy understanding of Zhang et al. (2022) in the context of BP is that you would construct a fictitious mediator player that plays against a team of deviator players (both sender and receiver). Reaching an equilibrium in the transformed game would then reveal an equilibrium (of a certain type) in the original BP game. \\n\\nPlease clarify why and how in your VBP framework the sender can be the mediator and still benefit from results from Zhang et al. (2022). \\n\\n> after transforming BP into a MAG, we apply the algorithm from Zhang et al. (2024)\\n\\nI still don't follow. \\n\\nZhang et al. (2024) takes a game of interest (the original BP game in your application) and provides a specific game transform that turns it into a two-player zero-sum MAG. \\n\\nWhat do you mean by \\\"...after transforming BP into a MAG, we apply the algorithm from Zhang et al\\\"? If you did transform the BP game in a specific way, then the guarantees of Zhang et al. should imply convergence in the transformed BP game, not the original BP game. Why is that a reasonable approach?\"}",
"{\"comment\": \"> **Q5: Clarification on the novelty of LLM deception strategies** *\\\"It might be that the optimization performed in the paper actually discovers interesting LLM behaviors and strategies, but this is hard to tell for me. I think I can see how the paper uncovers interesting behaviors within the setting studied here, i.e. when optimizing prompts, it's interesting that some amount of lying/deceiving gets reinforced, and that this game setup works in a sense and finds something like an equilibrium. But I haven't been convinced that this specific setup is interesting enough to study on its own\\u2014it seems too artificial to me to add a lot beyond either (i) the existing toy game theory setting on one hand, or (ii) just studying persuasion directly by prompting LLMs to write lying/deceptive/persuasive etc. texts.\\\"*\\n\\n3. **Addressing the Perceived Artificiality**\\n We acknowledge your concern that the setup could feel artificial, especially compared to \\\"toy\\\" game theory settings or direct studies of LLM behavior. However, our choice of a more structured game-theoretic approach is deliberate. We aim to provide a **methodologically rigorous** way of studying persuasion and deception in LLMs that extends beyond individual case studies or anecdotal observations. By embedding the LLMs in formalized game settings, we have the tools to:\\n\\n - Ensure **repeatability** and **consistency** in the behaviors we observe.\\n - Control and **isolate variables** to study specific aspects of LLM behavior in strategic contexts.\\n - Provide **theoretical guarantees** about the strategies that emerge, such as ensuring the solution is an **equilibrium**.\\n\\n While this may introduce some level of abstraction, it gives us a stronger basis for understanding how LLMs might behave in real-world scenarios where strategic communication is critical, such as negotiations, recommendations, or advertising.\\n\\n4. 
**Beyond LLM Case Studies: Why Game-Theoretic Analysis Matters**\\n Studying LLMs through case studies of deception or persuasion is certainly valuable, but it lacks the **structure** and **predictive power** that a game-theoretic analysis provides. By casting the problem in a formal BP framework, we can:\\n\\n - Explore **optimal strategies** that are theoretically justified.\\n - Understand the **conditions under which deception or persuasion emerges**.\\n - Generalize findings beyond individual case studies to broader classes of strategic interaction where LLMs are involved.\\n\\n This structured approach is a **novel contribution** to the study of LLM behavior, offering insights that are harder to obtain from unstructured case studies alone.\\n\\nIn summary, while the behaviors we observe (such as deception) may not seem novel in isolation, the **framework** and **methodology** used to uncover and analyze these behaviors are the key contributions of our work. We go beyond simple prompt-based experiments to offer a **game-theoretic solution** to verbalized BP problems backed by theoretical guarantees and optimized strategies. We believe this adds significant value to the study of LLMs in strategic communication settings. Thank you again for your thoughtful comments, and we hope this clarifies the novelty and significance of our approach.\"}",
"{\"comment\": \"I remain confused by this interpretation of the MAG where one of the players in the original game can take on the role of the mediator.\\n\\n> In Definition 2.1 of Zhang et al. (2024), they specify that n (the number of players) can equal 1.\\n\\nCould you clarify which sentence states this? Do you mean \\\"a set of players, identified with the set of integers [n] := {1, . . . , n}.\\\"? If so, that's not at all how I read it. \\n\\nMy understanding of the \\\"information advantage\\\", or \\\"power to commit\\\", in Zhang et al. (2024) is that the mediator indeed gets to know private information about both the sender and receiver. \\n\\nConsider a game like `goofspiel`: both players may choose to reveal (or not) their hidden hand to the mediator player, who is then interested in 1) achieving an equilibrium such that no one wishes to deviate from its proposal and 2) selecting an equilibrium that's optimal by some metric. \\n\\nThe information advantage lies in the fact that the mediator can receive messages from all players, effectively knowing their hidden hands. The power to commit lies in the fact that the mediator knows that the deviators also know of the mediator's information advantage, and can therefore recommend actions that a deviator would find it advantageous to follow. \\n\\nI'm really baffled by this alternative interpretation you are proposing, in which the mediator is one of the players in the original game --- at a basic level, if there's only one other player in the game, between which players is the mediator mediating?\"}",
"{\"comment\": \"> **Q3: Clarification of Behavioral Shaping in Game Theory:** *L38-40: 'shaping the behaviours of others ... achieve this through either mechanism or information design'. I find this unclear or overly assertive. How each player's actions shape those of others is the entire focus of game theory yet this opening statement makes it sound like co-player behaviour shaping can only occur with modified rewards or observations. You would not deterministically play rock because you know I could exploit by always playing paper, would that count shaping the behavior of co-players?*\\n\\nThank you for bringing up this important point. We fully acknowledge that the original phrasing may have been overly assertive and potentially misleading. The statement was not intended to imply that shaping player behavior in game theory can only occur through mechanisms or information design. Instead, our goal was to highlight that, in the specific context of **multi-agent reinforcement learning (MARL)** and **mixed-motive scenarios**, these two approaches\\u2014mechanism design and information design\\u2014are the predominant methods used to influence and shape behaviors.\\n\\nWe understand that the shaping of co-player behaviors is a fundamental aspect of game theory, where players' strategies naturally influence each other through their interactions. As the reviewer correctly pointed out, behavior shaping can occur in many forms, not always tied to explicit modifications of rewards or observations. For instance, players might adjust their strategies based on expectations of others' behaviors (e.g., in the classic rock-paper-scissors example), which does indeed count as shaping co-player behaviors. \\n\\nIn our specific context, we were focusing on **how MARL systems typically address strategic interactions** in mixed-motive settings. 
In these systems, mechanism design (modifying reward structures or the rules of the game) and information design (controlling the flow of information or signals between agents) are common tools to systematically influence agent behaviors toward desired outcomes.\\n\\n**Response to the Reviewer's Example**: \\nRegarding the reviewer's example of rock-paper-scissors, where one might not deterministically play rock just because the other player could always play paper, we completely agree that this illustrates a form of strategic behavior shaping that does not rely on modified rewards or information control. This example indeed illustrates a core concept in game theory, where players anticipate and react to others' strategies based on their incentives and expectations. This dynamic is central to understanding equilibrium concepts like Nash equilibrium, where players' strategies naturally adapt to one another even without external interventions like mechanisms or information design.\\n\\n**Revision Plan**:\\nTo address this, we will rephrase the statement in the revised version of the paper to better reflect the broader scope of behavior shaping in game theory. Our revised statement will clarify that while **mechanism design** and **information design** are prominent tools in **MARL** and **mixed-motive game settings**, they are not the only ways to shape behaviors in general game theory. We will also explicitly acknowledge that players' strategies can shape co-player behaviors in many ways, including through natural strategic interactions, as described in the reviewer's example.\\n\\nWe appreciate the reviewer's thoughtful input on this and will ensure the revised text reflects a more accurate and nuanced view of how behavior shaping occurs in game theory.\"}",
"{\"comment\": \"> **Q2: Clarification on reducing the length of preliminaries** *\\\"In general, I would prefer there to be less preliminaries and to get to the results faster. I wonder whether one could simplify some of the discussion of preliminaries to the parts that matter for the paper, though I'm not sure.\\\"*\\n\\nWe appreciate your feedback regarding the length of the preliminaries and the suggestion to streamline this section in order to focus on the results more quickly. We understand that an extended preliminaries section can delay the reader\\u2019s engagement with the core contributions of the paper, and we have taken steps to address this concern in the revised version.\\n\\n1. **Reorganization of the Preliminary Section** \\n In the revised version of the paper, we will restructure the preliminaries to ensure that only the most essential background information is retained. Specifically:\\n - We will **merge Section 2.1 (Bayesian Persuasion)** and **Section 2.2 (Modeling BP as a Mediator-Augmented Game)** into a new, more concise **Problem Formulation** section. This will present the key concepts needed to understand the problem we are addressing without the need for excessive background details.\\n - **Section 2.4 (Classic BP Problems)** will be moved to the experimental section, where it will be introduced in the context of the experiments. This will allow us to integrate the discussion of classic Bayesian persuasion problems directly with the experimental results, streamlining the flow of the paper.\\n\\n2. **Focus on Core Contributions in Preliminaries** \\n We will revise the preliminaries to focus more narrowly on the key contributions of the paper. For example, we will retain **Section 2.3**, which introduces the **PSRO** (Policy Space Response Oracles) and the **prompt-space response oracle** framework. This framework is central to our approach and necessary for understanding the optimization of prompts in the game-theoretic setting. 
By concentrating on the most relevant components, we aim to reduce the length of the preliminaries while maintaining clarity.\\n\\n3. **Balancing Background and Results** \\n By reorganizing the preliminaries and moving some sections to later parts of the paper, we believe we can better balance necessary background information and the presentation of results. This adjustment will allow readers to engage with the core contributions earlier in the paper without sacrificing the necessary theoretical context.\\n\\nIn summary, we agree that the preliminaries can be streamlined and have taken concrete steps to simplify and condense this section in the revised version. We believe that this restructuring will improve the readability and flow of the paper, allowing readers to focus more quickly on the novel contributions of the work. Thank you again for your constructive suggestion.\"}",
"{\"comment\": \"Thank you for your continued engagement and detailed questions regarding our interpretation of the Mediator-Augmented Game (MAG) framework and its application to our Verbalized Bayesian Persuasion (VBP) framework. We appreciate the opportunity to clarify our approach further.\\n\\n---\\n\\n1. **Clarification Regarding Definition 2.1 in Zhang et al. (2024):**\\n\\n In Zhang et al. (2024), Definition 2.1 states: *\\\"A set of players, identified with the set of integers [n] := {1, . . . , n}.\\\"* While this definition does not explicitly state that the number of players can equal 1, it is a natural mathematical interpretation that the set of players can be empty or contain a single element (e.g., when n = 1). This flexibility is consistent with standard practices in game theory, where frameworks are often generalized to accommodate different numbers of players.\\n\\n To further verify this interpretation, we contacted the authors of Zhang et al. (2024) directly. Their response explicitly confirmed that their framework applies to Bayesian persuasion (BP) problems involving a single sender and a single receiver and, crucially, that the sender can indeed be modeled as the mediator in such settings. The authors stated:\\n\\n > \\u201cYes, the framework is applicable to BP, and indeed the sender is the mediator. I don't think there's anything more specific that needs to be done for the framework to apply to BP.\\u201d\\n\\n This direct clarification from the original authors confirms that our interpretation aligns with their framework's intended scope and applicability.\\n\\n---\\n\\n2. **Regarding the Role of the Mediator in a Single-Sender-Single-Receiver Game:**\\n\\n You raise an excellent question about how the mediator can function in a game with a single sender and a single receiver. 
To address this:\\n\\n - In Bayesian persuasion problems, the sender (mediator) has an information advantage and commits to a signaling scheme to influence the receiver\\u2019s actions. In this case, the mediator is not mediating between multiple players but rather between the **sender\\u2019s private information** and the **receiver\\u2019s decision-making process**. This aligns with the broader information design perspective, focusing on how the mediator\\u2019s informational advantage can be leveraged to achieve desired outcomes.\\n\\n - The sender\\u2019s role as a mediator is consistent with the examples and theoretical discussions in Zhang et al. (2022). For instance, in their discussion of Bayesian persuasion (Appendix F), they explicitly acknowledge that the mediator can take on the role of the sender (or principal) in such settings:\\n\\n > \\u201cIn Bayesian persuasion, also commonly referred to as information design, the roles of the mediator and player are reversed compared to automated mechanism design: the mediator (\\u2018principal\\u2019) has informational advantage, and the player(s) take the actions.\\u201d\\n\\n This statement directly supports our interpretation, where the sender functions as the mediator by leveraging their informational advantage to influence the receiver\\u2019s decisions.\\n\\n - Additionally, Zhang et al. (2024) discuss the mediator\\u2019s power to commit to strategies, a key element in our VBP framework. The sender-as-mediator commits to a strategy (signaling scheme) to influence the receiver\\u2019s actions, which aligns with the mediator\\u2019s role in achieving equilibrium outcomes.\\n\\n---\\n\\n3. **The Mediator\\u2019s Role in the Context of \\\"Mediating Between Players\\\":**\\n\\n While it may seem counterintuitive for the sender to act as a mediator when there are only two players, it is important to note that the mediator is not necessarily mediating **between players** in the traditional sense. 
Instead, the mediator facilitates the game by leveraging their informational advantage and commitment power to influence outcomes. This interpretation is supported by the original authors of Zhang et al. (2024) and the broader literature on Bayesian persuasion and information design.\\n\\nUse your example of a game like Goofspiel: in a BP setting, the sender (mediator) does not need to mediate between multiple deviating players. Instead, the sender\\u2019s goal is to design a signaling scheme (analogous to revealing or withholding information) to influence the receiver\\u2019s actions to maximize the sender\\u2019s utility. This dynamic remains valid even with a single sender and a single receiver, as the mediator\\u2019s role is fundamentally about shaping the information structure of the game.\\n\\n---\\n\\n4. **Conclusion:**\\n\\n   In summary, our interpretation of the sender as the mediator in a single-sender-single-receiver Bayesian persuasion problem is fully consistent with the theoretical framework of Zhang et al. (2022, 2024). This has been confirmed by our reading of their work and direct communication with the original authors. We will incorporate these clarifications into the revised version of our paper to ensure that this interpretation is more explicitly addressed.\\n\\n\\nThank you again for allowing us to further elaborate on this important aspect.\"}",
"{\"comment\": \"> **Q4: Clarification on the practical relevance of the bespoke algorithms** *\\\"Given that the paper uses many bespoke algorithms to solve different aspects of the setting, I think this won't be that useful in practice. E.g., I think it's unlikely any of these will be useful for training better LLMs. If the goal is more to study propensities of current LLMs and to find out something about persuasion with LLMs, I am not sure what exactly the takeaway is. Is it e.g. 'LLMs can implement complex strategies of deception/lying/etc.'? If so, then I think this is not novel and also doesn't require the complexity used in the paper. I might be missing something here and am curious what the authors think.\\\"*\\n\\nThank you for your insightful questions regarding the practical relevance of the algorithms we propose in this paper. We understand your concerns about whether these algorithms will be useful in practice, especially in comparison to directly training large language models (LLMs) for persuasion tasks. Below, we aim to clarify the motivations behind our approach and how it provides practical benefits over methods that rely solely on in-weight updates or direct LLM training.\\n\\n1. **Training Stronger LLMs vs. Lightweight Optimization**\\n As you pointed out, training a more powerful LLM to handle persuasion tasks is possible. In fact, several existing works have already demonstrated that state-of-the-art (SOTA) models exhibit some level of persuasion capabilities through case studies. However, training LLMs via **in-weight updates** is extremely costly in terms of both **time and resources**. Furthermore, this approach lacks theoretical guarantees such as **convergence** or **optimality**, making it difficult to analyze or explain the model's behavior in a structured way.\\n\\n In contrast, our approach\\u2014**Verbalized Bayesian Persuasion (VBP)**\\u2014offers a **lightweight alternative** that avoids the need for expensive retraining. 
By focusing on **in-context updates** through prompt optimization, we achieve a method that is more practical to deploy and analyze in real-world settings. This approach allows us to extract more persuasive capabilities from models that may not have been explicitly trained for such tasks, without requiring the extensive computational resources that in-weight updates demand.\\n\\n2. **Theoretical Benefits of VBP**\\n One of the key advantages of VBP over direct LLM training is its **stronger theoretical foundations**. By combining game-theoretic principles with prompt optimization, we provide a framework that allows for a more rigorous analysis of the solutions generated by the LLMs. For instance, the VBP framework allows us to reason about the **optimality** of the strategies produced. It ensures that the system converges to a solution that aligns with the objectives of the Bayesian persuasion game. These theoretical properties are difficult to guarantee when using purely data-driven approaches for training LLMs.\\n\\n3. **Practical Relevance of Prompt Optimization**\\n From a practical standpoint, prompt optimization (as used in VBP) offers a more scalable solution for real-world applications, especially those involving **advertising** or **conversational agents**. Many of these applications are moving toward **edge deployment**, where models must operate efficiently on local devices with limited computational resources. In such cases, **prompt-based methods** are far more feasible than retraining large models. VBP provides a framework that can be deployed in these environments, offering a practical solution for implementing persuasive strategies without the overhead of training entirely new models.\\n\\n4. **Enhancing Persuasion with Weaker Models**\\n Another key objective of VBP is to enhance the persuasive capabilities of models that may not have been explicitly trained for persuasion. 
By combining **game-theoretic methods** and **in-context learning**, we can extract more sophisticated persuasion strategies from models that might otherwise exhibit only rudimentary abilities in this area. This offers a way to augment the performance of less capable models, making VBP a valuable tool for improving persuasion in a wide range of LLMs without the need for high-resource, bespoke model training.\\n\\nIn summary, while it is possible to train stronger models to handle persuasion, our approach with VBP offers a **more lightweight, practical, and theoretically grounded solution**. By leveraging prompt optimization and game-theoretic principles, VBP can be deployed efficiently in real-world applications, especially in resource-constrained environments, while also providing a framework for deeper theoretical analysis. We hope this clarifies the motivation and practical relevance of the algorithms we've proposed. Thank you again for your valuable feedback.\"}",
"{\"comment\": \"> **Q8: Request for prompts and transcripts in the main paper** *\\\"It would be nice to have some (possibly abbreviated/stylized) prompts and transcripts in the main body of the paper.\\\"*\\n\\nThank you for your suggestion regarding the inclusion of prompts and transcripts in the main body of the paper. We understand that providing more concrete examples of the prompts and signals would help clarify the mechanics of our approach and make it easier for readers to understand the nuances of the LLM behaviors.\\n\\nIn response to your feedback, we will incorporate the following changes in the **revised version** of the paper:\\n\\n1. We will **summarize key prompts** from **Appendix E.3** and include them in the main body. These prompts are crucial for demonstrating how the LLMs are guided within the game-theoretic framework.\\n2. Additionally, we will integrate content from **Appendix F.5 (generated signals)** and **Appendix F.6 (generated prompt functions)** into the main paper. These sections provide detailed examples of the signals produced by the LLMs and how prompt functions are optimized during the process.\\n\\nBy summarizing and presenting these examples in the main body, we aim to give readers a clearer view of the actual interactions taking place during the experiments and the optimization process. We believe this will enhance the overall readability and accessibility of the paper.\\n\\nThank you again for your valuable suggestion, and we are confident that these additions will improve the clarity of the presentation.\\n\\n---\\n\\n> **Q9: Clarification on the commitment assumption with non-specific prompts** *\\\"If the prompt doesn't specify exactly how and when to lie, how can this still guarantee the commitment assumption?\\\"*\\n\\nThank you for your thoughtful question regarding the commitment assumption in the context of non-specific prompts. 
We understand your concern about how the commitment assumption holds if the prompt does not explicitly specify how and when the sender (LLM) might lie.\\n\\nTo clarify, even in the **classic Bayesian persuasion (BP) problem**, the **receiver** does not know exactly **how or when** the sender might lie. The receiver only knows the **probability** that the sender is lying based on the sender's overall strategy. The receiver makes decisions with this probabilistic understanding in mind rather than requiring specific details about individual instances of deception.\\n\\nOur **VBP** framework is aligned with this classic BP setup. The prompts we use do not need to specify the exact form of deception for the model to adhere to the commitment assumption. Instead, the sender (LLM) is committed to a **probabilistic strategy** that the receiver understands in aggregate, even if the specific actions or lies are not fully determined in advance.\\n\\nTherefore, the **commitment assumption** in our framework is upheld in the same way it is in classic BP: the sender is committed to a strategy that the receiver interprets probabilistically, ensuring that the game dynamics and decision-making processes remain consistent with the theoretical foundations of Bayesian persuasion.\\n\\nWe hope this clarifies how the commitment assumption is preserved in our framework, even with non-specific prompts. Thank you again for your insightful question.\"}"
]
} |
E5ulvtj86q | Spatial-aware decision-making with ring attractors in Reinforcement Learning systems | [
"Marcos Negre Saura",
"Richard Allmendinger",
"Theodore Papamarkou",
"Wei Pan"
] | This paper explores the integration of ring attractors, a mathematical model inspired by neural circuit dynamics, into the reinforcement learning (RL) action selection process. Ring attractors, as specialized brain-inspired structures that encode spatial information and uncertainty, offer a biologically plausible mechanism to improve learning speed and predictive performance. They do so by explicitly encoding the action space, facilitating the organization of neural activity, and enabling the distribution of spatial representations across the neural network in the context of deep RL. The application of ring attractors in the RL action selection process involves mapping actions to specific locations on the ring and decoding the selected action based on neural activity. We investigate the application of ring attractors by both building them as exogenous models and integrating them as part of a Deep Learning policy algorithm. Our results show a significant improvement in state-of-the-art models for the Atari 100k benchmark. Notably, our integrated approach improves the performance of state-of-the-art models by half, representing a 53% increase over selected baselines. | [
"Reinforcement Learning",
"Computational Neuroscience",
"Deep Learning",
"Ring Attractors",
"Spatial Awareness",
"Bioinspired"
] | Reject | https://openreview.net/pdf?id=E5ulvtj86q | https://openreview.net/forum?id=E5ulvtj86q | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zU2B1mZ8eA",
"z1HG7ez3dm",
"upkIGTqE8j",
"q7qcrIjEBy",
"nbLGJpxgeT",
"lOYBlXss43",
"gnmltziTwM",
"cnJH7lGhzn",
"aGxsRGzPfo",
"XOIqZz4Eg8",
"WibeqFj0sM",
"Qvb3dZnGx1",
"QRaBN0cwlp",
"Q6r9FKC39M",
"M9mMMAzrJF",
"LWH3Yh93hC",
"HoVUmz7pwi",
"DdEKCAqC6C",
"9YlHpYsxB7",
"9KNIfIrgSi",
"9IbgKzz0zp",
"6Ugv8nzaEW",
"4eAEDmTroM",
"3pHtzASVvq",
"2qapMZljAv",
"1lM45lAsdE",
"19185b6wz3"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment"
],
"note_created": [
1733270391230,
1730696402764,
1732816610107,
1732314251893,
1732362864515,
1730717882115,
1732329153894,
1732587641302,
1732588388384,
1732587895429,
1737524082167,
1732460765284,
1732317382355,
1730652956285,
1732481596427,
1730693705088,
1732651161496,
1732329364657,
1732771192874,
1732325962396,
1732824593169,
1732650742943,
1732805189948,
1733222013403,
1732521958874,
1734750790320,
1732650543272
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_xjv3"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_96JC"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_6Rpb"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_xjv3"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_3UD9"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_96JC"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_96JC"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_3UD9"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_6Rpb"
],
[
"ICLR.cc/2025/Conference/Submission10862/Reviewer_xjv3"
],
[
"ICLR.cc/2025/Conference/Submission10862/Area_Chair_D925"
],
[
"ICLR.cc/2025/Conference/Submission10862/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"We appreciate the reviewer's insightful comments and would like to provide further insights:\\n\\n**1. Role of the ring attractor in uncertainty quantification (UQ):**\\nThe ring attractor primarily serves as a spatial encoding mechanism, while uncertainty quantification happens through the BDQN framework. The RA architecture provides a structured topology for organizing actions and their uncertainties, but does not directly participate in computing those uncertainties. The variance estimates from BDQN's Bayesian calculations feed into the Gaussian input signals to the ring attractor (via \\u03c3\\u2090 in eq. 1), allowing uncertainty to influence the activation patterns and, in turn, the action selection process. The correct distribution and injection of uncertainty measurements into the ring improves action selection, as presented in Figure 2 (Section 4.1 of the experiments) and supported further by the ablation study in Appendix A.2.1. Ring attractors enhance BDQN's performance by encoding actions in a circular topology while incorporating uncertainty through Gaussian variance parameters \\u03c3\\u2090, mathematically expressed through input signal equations $x_n(Q) = \\\\sum_{a=1}^{A} {\\\\frac{Q(s,a)}{\\\\sqrt{2\\\\pi \\\\sigma_a}} \\\\exp\\\\left(-\\\\frac{1}{2}\\\\frac{(\\\\alpha_n - \\\\alpha_a)^2}{\\\\sigma_a^2}\\\\right)}$. This structure of the action space provides benefits beyond BDQN's Bayesian estimation by using explicit spatial encoding in the behavior policy plus uncertainty quantification expressed as the variance of the action signals. As demonstrated by [Sun et al. (2018)](https://www.researchgate.net/publication/326227083_An_Analysis_of_a_Ring_Attractor_Model_for_Cue_Integration), ring attractors provide a robust framework for combining cues according to their certainty, which we use to our advantage during action selection.\\n\\n**2. 
What is the nature of the spatial relationship encoded by RA?** The spatial organisation provided by ring attractors fundamentally enhances action selection by explicitly encoding the action space and enabling the distribution of spatial representations across the neural network. Correlations between actions are handled by a trainable hidden space, as presented in the appendix, Sections A.2.3, A.4; and methodology, Section 3.2.1. As detailed in Section 4.2, games with primarily directional movement like Asterix utilise a single-ring configuration for eight directional movements, while games combining movement with independent actions like Seaquest employ a double-ring configuration; one for movement and another for secondary mechanics. The biological plausibility of this approach can be traced back to the studies presented in the appendix, Section A.1, where ring attractors provide the spatial cue integration that we employ in the context of action selection.\\n\\n**3. I appreciate the newly added Appendices, though am still baffled at some implementation details.**\\nThe learnable time constant \\u03c4 in equation 13 controls the rate of information integration. Since the connection weights are fixed to preserve the ring topology, this learnable parameter allows the rate of transfer from previous layers to vary according to the needs of the DL agent. As detailed in Section 3.2.1, this allows the network to balance spatial relationships with task-specific learning while preserving ring attractor dynamics through the fixed distance-dependent connection weights.\\nThe selection of \\u03c3\\u2090 = \\u03c0/6 for BDQNRA was empirically determined to provide an optimal balance between action discrimination and smooth transitions in the tested environments. As shown in Section 4.1, this value enables smooth action transitions while preventing interference with opposing actions. 
This intermediate step (BDQNRA) between BDQN and the full integration of uncertainty (BDQNRA-UA) was put in place to make it easier to distinguish how each component (spatial representation and uncertainty) was actually providing an improvement over the baseline.\\nEquation 19 in Appendix A.4.1 is computed in a single forward pass during inference, despite Fig. 8 illustrating the temporal evolution of hidden states for visualization purposes. The complete forward pass combines input signals and hidden state information in one step through the matrix operations defined in the equation.\\n\\n**4. To help clarify some of the questions above, would the authors be open to share their code?** We acknowledge the importance of sharing our codebase for future research and are actively working on preparing it for release. While we were initially planning to share this at a later stage, we understand the value of making it available sooner. We will have the implementation details ready by the end of this week, including baselines for BDQN, DDQN, and Efficient Zero. Although we weren't able to prepare the codebase within one day of the request, it will be available at [this repository](https://github.com/marcosaura/RA_RL).\"}",
"{\"summary\": \"This paper proposed a method for using a ring attractor model as the final layer of a model-free reinforcement learning model on an agent with discrete actions for tasks in a 2D space. Given an observation and all actions, with the ring attractor, the Q value estimation can take advantage of prior knowledge of space sense for a better action decision. The experiments report that three models with this method outperformed the original baseline models in 2D video games.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This work contributes to an important concept in decision-making during reinforcement learning in a task with space: how to associate values with directions. This paper presents a method to do it with a ring attractor which plays a role in Q value fusion with a prior of space.\", \"weaknesses\": \"To my understanding, this method implicitly assumed that each action is associated with a direction, the Q value of different actions is a Gaussian distribution in different directions, and Q values in different directions can be summed up. These assumptions, which are not always true, were not discussed clearly in the paper. For example, given a task, not all actions are associated with a direction. In this case, optimising the ring attractor with gradient descent could lead to the degeneracy of dynamics. This paper should show if the ring attractor still works as expected after training.\\n\\nThe paper also did not present ring attractor models correctly and contains fundamental errors. In section 3.1, the title claims that the ring attractor model being presented is a spiking neural network; however, the presented model is a firing-rate model without spikes. The equations describing the model look like an inappropriate combination of two different models because the definitions are not consistent. For example, $i_n$ in equation (4) and (5) are different. 
There are many other mathematical errors in the paper; although some of them are just typos, they impact the quality and soundness of the paper.\\n\\nIn the experiments, three different models that were not clearly reviewed or mentioned are presented. The name abbreviations of the models appear abruptly, thus a reader has to guess what they are; even the models this paper tried to propose are in this case. For example, what are BDQNRA, BDQNRA-UA, and EffZeroRA? And where do they come from? Although the basic method to combine a network with the ring attractor is presented, the specific models resulting from the method should be introduced before presenting the results.\", \"questions\": \"1. Line 30. The ring attractor model was proposed much earlier than 2017; the original paper should be cited.\\n2. Line 128, 131, 169, 416. None of the models presented in this paper are spiking.\\n3. The content in Section 3.1.1 is not the contribution of the work. It should be in the related work or background.\\n4. In Lines 159 and 160, $i$ was referred to as an input signal; however, it is used as an index in later paragraphs. They are very different concepts.\\n5. Equation (2)(3). A mixture of differential and difference equations, especially \\\"$u_{+\\\\Delta t}$\\\" is a confusing term.\\n6. Line 178, should $\\\\Delta t$ be a constant in the context of ODE?\\n7. Equation (2)(3). What is $i_n$ given that there is only one inhibitory neuron? The equation in Line 193 about the weights from the inhibitory neuron to excitatory neurons is confusing.\\n8. Equations (4)(5) result in different definitions of $i_n$.\\n9. Equation (6). Define $w^{I \\\\rightarrow I}$.\\n10. Equation (7). Define $\\\\alpha$ when it is not a function.\\n11. Equation (8). Wrong parentheses.\\n12. Line 249. The equation in this line contains a standalone parenthesis.\\n13. Equation (9). Why is there a normalisation?\\n14. Line 266 introduced $\\\\Phi$ as an algorithm, but used as a function in later equations. 
What is the output of the function?\\n15. Line 264, 265 and 268, why $Q(s,a)=\\\\Phi_\\\\theta(s)=\\\\theta ^ T x(s)$? Prove it or cite a reference.\\n16. Line 286 and 295, does $w_a$ represent the weights or the outputs of the upstream model?\\n17. Equation (11). What are $y$, $x'$ and $\\\\Phi_{\\\\theta_{target}}$?\\n18. Equation (13). Does the equation assume there are more than one inhibitory neurons? It is different from the early sections. $abs$ is in Italic thus not a good format to be a function. $d(m,n)$ defined twice and differently here. Why does the second definition of $d(m,n)$ contain a term $(m-m$?\\n19. Table 1. What does the double ring mean? It is not explained mathematically.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The work does not raise any ethical concerns for me.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Can you point me to where the additional material / changes relating to the baseline \\\"incorporating the requested baseline comparisons for the 1-D circular variable action space\\\" were made? I did not see it while skimming the text highlighted in orange in the revised manuscript.\"}",
"{\"comment\": \"Thank you for this excellent feedback; it has given us meaningful ways to strengthen our work:\\n\\n**1 Novelty**\\n\\nProblem Statement\\n\\nThe problem we address is achieving faster learning for decision-making algorithms, particularly within the RL framework. While ring attractors have been well-studied in neural models of decision-making, their theoretical integration into fundamental decision-making frameworks beyond neuroscience remains limited. We acknowledge that ring attractor models have been studied in theoretical work. However, our paper's novelty lies in several key contributions:\\n\\nNovel Integration with RL/DL\\n\\nWe are the first to propose using ring attractors as a mechanism for action selection in RL, creating a bridge between neuroscience-inspired models and RL/DL architectures to provide spatial-aware decision-making that increases the learning rate of modern algorithms. To apply this concept to more mainstream approaches, we create a novel implementation of ring attractors using RNNs (Section 3.2) that is compatible with DL frameworks while preserving their key properties.\\n\\nOur approach introduces explicit spatial encoding of actions, which is fundamentally different from existing methods. As demonstrated in Section 4.3, this leads to performance improvements, particularly in spatially-structured tasks (e.g., 110% improvement in Asterix, 105% in Boxing).\\n\\nTheoretical Insights\\n\\nEquations 5 and 6 describe the behaviour of our policy. 
We show through ablation studies (Figures 4 and 5) that the dynamics in the ring, together with the correct layout of the action space as input signals, are what yield the actual learning improvements.\\n\\n**2 Insight on why the model excels at action sampling**\\n\\nRing attractors enhance BDQN's performance by encoding actions in a circular topology while incorporating uncertainty through Gaussian variance parameters \\u03c3\\u2090, mathematically expressed through input signal equations $x_n(Q) = \\\\sum_{a=1}^{A} {\\\\frac{Q(s,a)}{\\\\sqrt{2\\\\pi \\\\sigma_a}} \\\\exp\\\\left(-\\\\frac{1}{2}\\\\frac{(\\\\alpha_n - \\\\alpha_a)^2}{\\\\sigma_a^2}\\\\right)}$. This structure of the action space provides benefits beyond BDQN's Bayesian estimation by using explicit spatial encoding in the behaviour policy plus uncertainty quantification expressed as the variance of the action signals. As demonstrated by [Sun et al. (2018)](https://www.researchgate.net/publication/326227083_An_Analysis_of_a_Ring_Attractor_Model_for_Cue_Integration), ring attractors provide a robust framework for combining cues according to their certainty, which we use to our advantage during action selection.\\n\\n\\n**3 Unclear performance boost between EffZeroRA and uncertainty estimation**\\n\\nThe Atari implementation uses our DL-based ring attractor model without uncertainty estimation. Uncertainty is only used in the CTRNN exogenous model with BDQN's BLR layer.\\n\\n**4 Compatibility of ring structure to arbitrary action spaces**\\n\\nAs described in Section 3.1.2, actions are mapped to specific locations on the ring based on their spatial relationships, providing significant flexibility that enables generalisation. Our approach has demonstrated robust performance across diverse action spaces in the Atari 100k benchmark. 
Single-ring configurations excel in directional movement games (Asterix, Ms Pacman), while double-ring configurations effectively handle games combining movement with independent action dimensions (Seaquest, BattleZone). Additionally, ablation studies in the appendix indicate minimal performance degradation, compared to baseline, when actions are misplaced in a ring layout.\\n\\n**5 Implementation details**\\n\\n1. This is now clarified in Section 3.2.1 (line 349).\\n2. Neural Ring Dynamics (Section 3.2.1): The learnable time constant \\u03c4 controls information integration into the ring attractor. The forward pass uses fixed distance-dependent weights to maintain spatial relationships, while the hidden state dynamics U(v)m,n employs trainable weights to learn complex action dependencies. By adjusting \\u03c4, we can regulate input signals to the RNN layer, balancing spatial relationships with task-specific learning while preserving ring attractor dynamics. We are working on a new appendix section that demonstrates the emergence of sustained ring patterns in our experiments.\\n3. It does not; all DL-based implementations are implemented as outlined in **3 Unclear performance boost between EffZeroRA and uncertainty estimation**.\\n4. SNNs do not play any role in this research; we apologise for the confusion. We've corrected the methodology to address this error. We use continuous-time recurrent neural networks (CTRNN) as the initial framework for the exogenous model.\\n\\n**Q1**\\n\\nThis has been developed in the section above: **5 Implementation details**.\"}",
"{\"comment\": \"We would like to thank the reviewer for the thoughtful review, particularly regarding CANN vs RNN intrinsic dynamics:\\n\\n**RNN vs CANN (CTRNN) intrinsic dynamics**\\n\\nWe acknowledge the reviewer's important question about visualising the dynamics of our RNN-based ring attractor implementation. The fundamental challenge stems from the discrete, synchronous update nature of standard DL frameworks, making it difficult to replicate the continuous-time dynamics of biological ring attractors. While our RNN implementation incorporates key architectural elements (a learnable time constant \\u03c4 and distance-based weight matrices V(s) and U(v) that capture spatial organisation, Eq. 13), it operates fundamentally in discrete time steps during both training and inference.\\nTo address this limitation and provide deeper insights into our model's dynamics, we are developing and will add a new appendix section. This section will present analyses of network behaviour through visualisation of: (1) emergence and stability of ring-shaped connectivity in the forward pass when fixed-weight constraints are removed for V(s), (2) temporal evolution of hidden states h(v) during action selection, (3) neuron activation patterns under varying input conditions, (4) stability analysis of attractor states by perturbing initial conditions, and (5) empirical investigation of how the learned time constant \\u03c4 modulates information flow between input signals and hidden states.\\nThese visualisations, while constrained by the discrete nature of DL frameworks, will provide concrete evidence for how our RNN implementation preserves essential computational properties of ring attractors. The analysis will complement our performance results from Section 4.3, offering insights into why our approach achieves significant improvements across diverse environments in the Atari 100k benchmark. 
This addition should also highlight promising directions for future research in approximating continuous-time dynamics within DL architectures.\\n\\n**Equation 9 normalisation and discrete action spaces**\\n\\nThe action selection mechanism in Equation (9) presents an initial formulation of the mapping between ring attractor dynamics and reinforcement learning action selection. This basic mapping operates under two fundamental assumptions about the action space: discreteness and uniform distribution across the ring attractor's circumference. The normalization term N(A)/N(E) enables the transformation from neural activity to action selection by mapping from the higher-dimensional neural representation (N(E) excitatory neurons) to the lower-dimensional discrete action space (N(A) actions). Given the uniform distribution assumption, each action corresponds to a contiguous arc of neurons in the ring, with the scaling factor ensuring that the maximally activated region of neural activity (identified by argmax_n{V}) maps to the appropriate discrete action index. This formulation maintains the ring attractor's spatial encoding properties while accommodating the discrete nature of the action space. A comprehensive treatment of various integration approaches, including single and double ring configurations and continuous action spaces, along with specific environment implementations, will be added in a forthcoming appendix section that is currently work in progress.\"}",
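The Eq. (9) normalisation described in this reply can be illustrated with a short sketch. This is our own reconstruction, not the authors' code; it assumes the N(E) excitatory neurons are evenly partitioned into N(A) contiguous arcs, one arc per action:

```python
import numpy as np

def select_action(activity, n_actions):
    """Map the peak of ring activity to a discrete action index.

    Illustrative reconstruction of Eq. (9): the scaling factor
    N(A)/N(E) maps the maximally activated neuron (argmax) of the
    N(E)-neuron ring onto the arc owned by one of the N(A) actions.
    """
    n_excitatory = len(activity)
    peak = int(np.argmax(activity))            # most active neuron on the ring
    return (peak * n_actions) // n_excitatory  # index of the arc containing the peak

# A Gaussian bump centred on neuron 13 of a 16-neuron ring, 4 actions:
bump = np.exp(-0.5 * (np.arange(16) - 13) ** 2)
print(select_action(bump, 4))  # → 3 (fourth arc, neurons 12-15)
```

Under the uniform-distribution assumption stated in the reply, each action owns a contiguous arc of N(E)/N(A) neurons, so the integer division recovers the arc index directly.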
"{\"summary\": \"The authors propose adapting the ring attractor network, a long-studied, biologically inspired neural network architecture, as a stochastic action selection module for reinforcement learning. The authors demonstrate mathematically how this model can integrate input uncertainty and convert it to probabilistic attractor states, which could be used to control action choices. The authors further embed the ring network model into existing deep reinforcement learning models, including BDQN, DDQN and EfficientZero, and show a significant boost in performance on the Atari game suite.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The ring attractor model is a prevalent and important biological architecture with demonstrated function in sensory integration and decision-making. Testing its utility as part of a deep reinforcement learning module is an interesting and promising direction. The mathematical formulation of the model is well presented. An extensive set of empirical evaluations has been done, showing a significant boost in performance over the EfficientZero baseline.\", \"weaknesses\": \"1)\\tNovelty: The ring attractor model itself was proposed decades ago, and much theoretical work has examined how this model architecture integrates information to output perceptual or action decisions. The ability of the ring attractor network to integrate or compute uncertainty has also been analyzed theoretically (e.g. Kutschireiter et al., PNAS 2023). So the novelty of the work does not lie in the theoretical analysis of the ring attractor model, but in its application as an action-selection module in RL.
While the empirical evaluation appears promising, there is no strong theoretical insight on why this model excels over existing algorithms for action selection (see more details below).\\n2)\\tInsight on why the model excels at action sampling: The authors claim that the ring attractor-based action selection introduces uncertainty awareness (UA) into action selection. However, the BDQN algorithm, which served as the baseline for BDQN-UA, already performs uncertainty estimation and action selection through approximate Thompson sampling. While the ring attractor model may bring an additional level of stochasticity into action sampling, it is unclear what key mechanism brings about the boost in empirical performance. In fact, as shown in Kutschireiter et al. PNAS 2023, computation in the ring attractor network essentially implements Bayesian inference. Since Bayesian inference is already done through BLR in the BDQN model, the goal of performing another round of inference through the ring network appears unnecessary. \\n3)\\tWhile EffZeroRA shows remarkable performance on the Atari suite, it is unclear whether the performance boost arises due to uncertainty estimation through the BLR on action values, or through the deployment of the ring attractor. Perhaps an ablation study comparing EffZero with BLR/Thompson sampling against EffZeroRA could help isolate the role of the ring attractor.\\n4)\\tCompatibility of ring structure to arbitrary action spaces: A feature of the dynamics of the ring attractor model is that nearby units tend to have correlated activity (due to strong excitatory connections). When used for action selection, this structural feature could introduce correlation in the sampling probability of actions represented by close-by units on the ring.
However, this correlation in sampling probability may not be desirable for all RL tasks, hence hindering the generality of the ring attractor as an action selection module.\\n5)\\tImplementation details--could the authors comment on: 1. Are the ring weights fixed or updated during training? 2. If updated, how is the appropriate decay of excitation weights maintained? 3. Does DDQN-RA use BLR for uncertainty estimation like BDQN-RA? 4. What specific role does neuronal spiking play in the exogenous model, and is it necessary?\", \"questions\": \"Could the authors comment on:\\n1) mechanistic insights on why the ring attractor model adds a performance boost on top of BDQN, which already performs Bayesian uncertainty estimation and posterior sampling of actions?\\n2) whether the implicit sampling bias in the ring model (i.e. sampling probability of \\u201cclose-by\\u201d actions may be correlated due to local excitation among ring units) could impact its generality in RL applications?\\n3) provide more implementation details as listed in point 5 above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
"{\"comment\": \"The reply addressed some of my questions, but several points still require further clarification, and some others are acknowledged but not answered yet.\\n\\n1. How does the RNN implementation of CANN differ fundamentally from the original model? Does the RNN version exhibit intrinsic dynamics? If intrinsic dynamics are present, it would be beneficial to include a figure illustrating these dynamics, such as curves of neuron activities under varying inputs.\\n\\n2. Could the authors provide an explicit explanation of why normalisation is applied in Equation (9) in the comment? \\n\\nPlease reply to the remaining questions if the authors make progress in the revision.\"}",
"{\"title\": \"Manuscript update\", \"comment\": \"We are grateful to the reviewers for their insightful feedback, which has helped us develop a more comprehensive manuscript. We believe these additions strengthen the paper's potential to spearhead new research bridging neuroscience-inspired models with standard DL approaches. We are providing an updated manuscript that expands and clarifies several key sections:\\n\\n\\u2013 4.1 EXOGENOUS RING ATTRACTOR MODEL PERFORMANCE ANALYSIS: We expand our analysis of the exogenous ring attractor model to include continuous action spaces, demonstrating its efficacy in mapping ring attractor outputs to a continuous 1D circular action space using the OpenAI Highway environment. \\n\\n\\u2013 A.2.3 DEEP LEARNING RING ATTRACTOR MODEL EVOLUTION: We introduce an examination of the ring attractor dynamics in our DL implementation. We analyse the evolution of ring-shaped patterns during training, revealing how the network naturally preserves spatial relationships while adapting to task-specific requirements. \\n\\n\\u2013 A.4 DEEP LEARNING RING ATTRACTOR MODEL IMPLEMENTATION DETAILS: We present implementation details covering both single and double-ring configurations. We provide the mathematical framework underlying different implementations and outline pathways for extending the approach to more complex configurations.\\n\\n\\u2013 A.5 MODELS AND ENVIRONMENTS IMPLEMENTATION: We offer systematic documentation of our experimental implementations, detailing the specific properties of each environment-model pairing. This addition includes configuration details and clarifies the particular requirements of different environments, to improve reproducibility and facilitate future applications of this approach.\"}
"{\"comment\": \"We appreciate the reviewer's insightful comments and would like to provide additional clarification.\", \"q1\": \"We completely agree with the reviewer's perspective on ring attractors. To clarify our position: while the ring attractor is indeed key for representing internal heading direction as a cognitive state variable, it does not act in isolation as the RL agent in this research. Rather, it functions as part of an integrated system where the RL agent combines the ring attractor's output with other computational elements provided by the baseline models to generate appropriate actions. We acknowledge that the ring attractor itself doesn't directly perform action selection in isolation, but rather contributes by providing spatial encoding that, when combined with other components of the RL agent, enables more effective overall performance.\", \"q2\": \"We have submitted a revised version of our work that addresses several points, particularly incorporating the requested baseline comparisons for the 1-D circular variable action space. We invite the reviewer to examine these new additions, which we believe may better demonstrate the system's capabilities.\"}
"{\"comment\": \"We appreciate the reviewer taking the time to look at the appendix and pointing out inconsistencies; we are reviewing the appendix equations and revising them to avoid repetition and improve clarity.\"}
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"We would like to express our gratitude again to the reviewer. We have addressed all their comments, which have improved the quality of our work:\\n\\n**Q1**: We acknowledged Zhang (1996), who first proposed ring attractors as a theoretical model for head direction cells, in the background placed in the appendix. As it is a key milestone, we've moved the citation to line 30 in the text.\\n\\n**Q2**: Apologies for the confusion with the nature of the ring attractor model. We've corrected the methodology to address this error.\\n\\n**Q3**: Though these equations are well-known, their placement in the methodology is useful as they form the working components of our RL behaviour policy implementation. Moving them elsewhere may disconnect the theoretical foundation from our actual algorithmic steps, making the methodology harder to follow and reproduce.\\n\\n**Q4**: We agree with the reviewer that $i$ refers to an index, with $x$ being the input signals to the ring attractor. We have clarified this (lines 155, 161). Additionally, we have changed notation for excitatory and inhibitory connections from $e, i$ to $\\\\epsilon, \\\\eta$ to address the same potential issue.\\n\\n**Q5**: To avoid confusion, we keep the ordinary notation and drop the middle fraction term. We aim to convey through equations 2 and 3 that we approximate a differential equation with a discrete difference equation.\\n\\n**Q6**: Yes, you are absolutely right. It is a constant in the context of the ODE, as stated in line 179. We could drop the derivative equation, but it provides context for the approximation we are performing. $\\\\tau$ as a constant no longer depends on time, hence the approximation we are showing in eq 3.\\n\\n**Q7**: The reviewer is absolutely correct. Elaborating on the weighted connection $i_n$ (now named $\\\\eta_n$ to avoid confusion with indices), the suffix $n$ refers to the excitatory neuron $n$ it connects to.
We've revised the equation at line 193 to explicitly show the result when only one inhibitory connection is placed in the ring.\\n\\n**Q8**: This has been fixed in equations 4 and 5, thank you.\\n\\n**Q9**: $w^{I \\\\rightarrow I}$ was not strictly needed, as a recurrent connection with distance 0 for the inhibitory neuron results in a redundant weight with value 1 ($e^0$). We've removed the term in equation 6.\\n\\n**Q10**: Fixed, thank you. We have defined $\\\\alpha_n$ (line 230) and unified the naming convention for $\\\\alpha_a(a)$.\\n\\n**Q11**: Fixed, equations 5 and 6 wrongly displayed the current excitatory and inhibitory terms $v_n, u$ divided by the time integration constant $\\\\tau$, which is incorrect. We have moved them outside of the equation parentheses.\\n\\n**Q12**: Fixed, thank you.\\n\\n**Q13**: The normalization term N(A)/N(E) enables the transformation from neural activity to action selection by mapping from the higher-dimensional neural representation (N(E) excitatory neurons) to the lower-dimensional discrete action space (N(A) actions). Given the uniform distribution assumption, each action corresponds to a contiguous arc of neurons in the ring, with the scaling factor ensuring that the maximally activated region of neural activity (identified by argmax_n{V}) maps to the appropriate discrete action index. This formulation maintains the ring attractor's spatial encoding properties while accommodating the discrete nature of the action space. A comprehensive treatment of various integration approaches, including single and double ring configurations and continuous action spaces, along with specific environment implementations, will be added in a forthcoming appendix section that is currently work in progress.\\n\\n**Q14**: We agree with the reviewer.
To clarify notation, the terminology here $\\\\Phi_\\\\theta(s)$ represents both a function approximation algorithm and a function, common in ML [LeCun et al., 1998](http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf). It represents the algorithmic process of approximating Q-values through neural network weights ($\\\\theta$) and feature extraction, while mathematically functioning as a mapping from states to Q-values through $\\\\theta^T x(s)$. The output of the function/function approximation algorithm is the Q-values for state-action pairs.\\n\\n**Q15**: Fixed, added reference (line 268).\\n\\n**Q16**: They represent the weights of the upstream Bayesian Linear Regression model (lines 287, 304).\\n\\n**Q17**: Fixed, developed further (lines 300-303).\\n\\n**Q18**: Fixed (equation 13). In the DL RNN implementation, no inhibitory neurons are present. Instead, standard DL neurons with tanh activation maintain the attractor state through their hidden state dynamics, rather than through biological-like lateral inhibition. \\n\\n**Q19**: The action space is split into two separate rings with weak connections between them; this implementation is not explicit in the original paper. We are also working to provide an appendix section that provides insights on integration for the different models and environments, including single and double ring implementations.\"}
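The Q5/Q6 exchange above (approximating the ring ODE with a discrete difference equation, with $\tau$ held constant) can be sketched as a forward-Euler step. This is an illustrative reconstruction under the assumed dynamics $\tau \, dv/dt = -v + W f(v) + x$, not the paper's exact equations:

```python
import numpy as np

def euler_step(v, x, W, tau, dt=1.0):
    """One forward-Euler step of leaky ring dynamics.

    Assumed ODE (illustrative): tau * dv/dt = -v + W @ f(v) + x,
    discretised as v <- v + (dt/tau) * (-v + W @ f(v) + x),
    with tau treated as a constant, as discussed in Q6.
    """
    f = np.maximum(v, 0.0)  # rectified-linear firing rate
    return v + (dt / tau) * (-v + W @ f + x)

# Drive one ring position and iterate toward steady state:
v = np.zeros(8)
x = np.zeros(8)
x[3] = 1.0
for _ in range(50):
    v = euler_step(v, x, 0.1 * np.eye(8), tau=5.0)
print(int(np.argmax(v)))  # → 3: activity settles at the driven site
```

The factor dt/tau is exactly the "approximation in eq 3" point of the exchange: a larger tau slows how quickly new input is integrated into the state.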
"{\"comment\": \"We sincerely thank the reviewer for their insightful feedback, which has particularly strengthened our methodology:\\n\\n**1 Action space assumptions**\\n\\nWhile our model does map actions to ring positions, as described in Section 3.1.2, it's not limited to purely directional actions. The paper demonstrates this through double ring configurations in Section 4.3, where games like Seaquest and BattleZone effectively use separate rings for movement and independent action dimensions. This architecture directly addresses cases where not all actions have inherent directional relationships.\\nRegarding the potential for degeneracy in cases where actions lack directional mapping, Section 3.2 of our paper details how the DL implementation specifically handles this concern. The architecture employs a fixed forward path (V(s)) that maintains structural relationships through distance-dependent weights, while also incorporating trainable hidden state dynamics (U(v)) that can adapt to non-spatial action relationships. \\nThe empirical validation in Table 1 demonstrates that our approach works effectively across different action space configurations - from single ring games like Asterix (showing 110% improvement) to double ring games like Seaquest (32%). These results suggest that the ring attractor dynamics remain stable and effective after training, even when handling diverse action types. Additionally, ablation studies in the appendix indicate minimal performance degradation, compared to baseline, when actions are misplaced in a ring layout.\\n\\n**2 SNN and mathematical presentation**\\n\\nWe apologise for the inconsistencies in the methodology, and we are working towards fixing all the issues that have been very well pointed out in this review. You are correct that Section 3.1 incorrectly describes a spiking neural network (SNN). We've corrected the methodology to address this error.
We use continuous-time recurrent neural networks (CTRNN) as the initial framework for the exogenous model. Regarding the inconsistency between equations (4) and (5), we are working on reconciling all mathematical definitions and ensuring consistency across equations.\\n\\n**3 Experiment models presentation**\\n\\nWe have added a brief clarification in line 380. Additionally, we will include integration specifications and implementation details for each model in the appendix. This revision should help readers better follow the relationship between our methodology and the specific model implementations discussed in the experimental results.\\n\\n**Q1:** We acknowledged [Zhang (1996)](https://www.jneurosci.org/content/16/6/2112), who first proposed ring attractors as a theoretical model for head direction cells, in the background placed in the appendix. As it is a key milestone, we\\u2019ve moved the citation to line 30 in the text.\\n\\n**Q2:** Apologies for the confusion with the nature of the ring attractor model. We've corrected the methodology to address this error. \\n\\n**Q3:** Though these equations are well-known, their placement in the methodology is useful as they form the working components of our RL behaviour policy implementation. Moving them elsewhere may disconnect the theoretical foundation from our actual algorithmic steps, making the methodology harder to follow and reproduce.\\n\\n**Q4:** Acknowledged and working on it.\\n\\n**Q5:** Acknowledged and working on it.\\n\\n**Q6:** Acknowledged and working on it.\\n\\n**Q7:** Acknowledged and working on it.\\n\\n**Q8:** Acknowledged and working on it.\\n\\n**Q9:** Acknowledged and working on it.\\n\\n**Q10:** Acknowledged and working on it.\\n\\n**Q11:** Fixed, equations 5 and 6 wrongly displayed the current excitatory and inhibitory terms $v_n, u$ divided by the time integration constant $\\\\tau$, which is incorrect.
We have moved them outside of the equation parentheses.\\n\\n**Q12:** Fixed, thank you.\\n\\n**Q13:** Acknowledged and working on it.\\n\\n**Q14:** Acknowledged and working on it.\\n\\n**Q15:** Fixed, added reference (line 268). \\n\\n**Q16:** They represent the weights of the upstream Bayesian Linear Regression model (lines 287, 304).\\n\\n**Q17:** Fixed, developed further (lines 300-303).\\n\\n**Q18:** Fixed (equation 13). In the DL RNN implementation, no inhibitory neurons are present. Instead, standard DL neurons with tanh activation maintain the attractor state through their hidden state dynamics, rather than through biological-like lateral inhibition. We are working to provide visualisation of the RNN forward and hidden state dynamics in an appendix section.\\n\\n**Q19:** The action space is split into two separate rings with weak connections between them; this implementation is not explicit in the original paper. We are also working to provide an appendix section that provides insights on integration for the different models and environments, including single and double ring implementations.\"}
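The fixed distance-dependent forward weights discussed in this thread (w_{E_m→E_n} = e^{-d²(m,n)}) can be sketched as follows. Treating d(m, n) as the wrap-around distance on the ring is our assumption, since the rebuttal does not spell out the metric:

```python
import numpy as np

def ring_weights(n_neurons):
    """Excitatory ring weight matrix w_{E_m -> E_n} = exp(-d^2(m, n)).

    Assumption (not stated in the thread): d(m, n) is the circular,
    i.e. wrap-around, distance between positions m and n on the ring.
    """
    idx = np.arange(n_neurons)
    diff = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(diff, n_neurons - diff)  # shortest path around the ring
    return np.exp(-d.astype(float) ** 2)

W = ring_weights(8)
print(W[0, 1] == W[0, 7])  # → True: neighbours either way are equidistant
```

Because the weights depend only on distance, the matrix is symmetric and circulant, which is what keeps the forward path's spatial structure fixed while the hidden state dynamics U(v) remain trainable.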
"{\"summary\": \"This paper proposes a novel approach to reinforcement learning (RL) that incorporates ring attractors\\u2014models inspired by neural mechanisms that encode spatial information\\u2014into the action selection process. By organizing actions on a ring structure and decoding decisions through neural activity, the method aims to improve learning speed, prediction accuracy, and stability in deep RL. When evaluated on the Atari 100k benchmark, it reported a 53% performance increase over baseline models, particularly in games with spatial components.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"It is interesting to consider using a ring attractor structure for decision making in RL.\\n\\nThe paper is well organized.\", \"weaknesses\": \"In the first approach, the rationale for using action values and their variances as connection weights is not well established. For instance, is there any prior work that employs Q-values as connection weights within a sub-network? Additionally, how should cases be handled when the Q-value exhibits very low variance? In Equation 7, adding the variance in the denominator of the weight calculation can lead to instability or weight divergence when the variance is close to zero.\\n\\nThe second approach, as presented in Section 3.2: \\u201cDL-based ring attractor integrated into the RL agent\\u201d, is not well motivated. From \\u201cRNNs perform well in modeling sequential data and in capturing temporal dependencies for decision making\\u201d to the statement that \\u201cRNNs mirror ring attractors\\u2019 temporal dynamics, with their recurrent connections and flexible architecture emulating the interconnected nature of ring attractor neurons\\u201d, the relationship between the two parts, if any, is hard to find. \\n\\nUngrounded claims.
\\u201cSpiking neural networks (SNNs) are employed for their biological plausibility and efficient temporal information processing, which closely mimic natural neuronal behavior.\\u201d In this work, there is no spiking neural network. The ring attractor used a rectified linear unit, which is a rate unit, rather than a spiking unit. The model in this work is also not a \\\"biologically plausible model\\u201d (Line 122). Both of the above claims are misleading and inaccurate.\", \"questions\": \"How is the \\\\mu_a in Eq. 12 used in the model? Is it the mean in the expression of x_n in Eq. 1? But it was denoted as \\\\alpha_a in Eq. 7.\\nCould the authors explicitly explain the relationship between these variables and their usage throughout the model?\\n\\nIs there a ring attractor at all in the model? The units interacting through Eqs. (5-6) COULD develop a ring attractor, but not all systems that interact like this actually develop one. This work did not spend any effort to confirm that the system actually develops a ring attractor when using Q-values as the connection weights. Could the authors provide specific evidence or analyses that would demonstrate the emergence of a ring attractor in their system? This can be achieved through visualizations or metrics that confirm the presence of ring attractor dynamics, as demonstrated in previous studies, such as Seung (1996, PNAS) and Kim (2017, Science). Although the 2017 Science paper is cited in this work, it is important to note that the concept of ring attractors in the brain has a longer history and a rich literature, tracing back to the 1996 PNAS study.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
"{\"comment\": \"Thanks to the authors for taking the time to respond to my feedback.\\n\\n1. Re: \\\"We appreciate the reviewer's perspective on ring attractors, and would like to offer a complementary interpretation based on spatial encoding evidence... ...demonstrated that ring attractors represent spatial awareness through heading direction... \\\"\\n\\nActually, this is exactly what I am saying: heading direction is an \\\"internal/cognitive/state variable\\\" and isn't directly encoding an action space. There are downstream computations that combine the internal heading angle variable and other internal variables (e.g. current goal heading) to generate an action (with RAs not playing a direct role).\\n\\n2. \\\"Baselines and 1-D circular variable action space\\\" -- Thank you, this is a critical baseline. Looking forward to seeing it.\\n\\nThank you for addressing the other comments!\"}
"{\"summary\": \"The authors propose incorporating ring attractors, a biologically-inspired neural circuit model for spatial information encoding, into reinforcement learning (RL) agents to improve action selection, particularly in spatially structured environments. The authors explore two ring attractor implementations on RL tasks: first, an exogenous spiking neural network (SNN), and second, a regular RNN-based ring attractor.\\nThey report significant performance improvements over baselines on the Atari 100k benchmark.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"# Strengths\\n* It was interesting to see how a neural architecture found in biology was integrated into a machine learning setting. Physical ring attractors are found in biological organisms, such as the Drosophila fruit fly, and offer a strong inductive bias that could be used to increase sample efficiency, as was done in this paper. (Note reservations below)\\n* Incorporating uncertainty in addition to tracking the mean is a nice contribution, potentially leading to more robust and adaptive behavior.\\n(Note reservations below)\", \"weaknesses\": [\"# Weaknesses\", \"It was not clear at all to me why Spiking Neural Networks need to be involved in this paper. Regular \\\"rate coding\\\" RNNs should be enough to encode a ring-attractor. To the best of my knowledge, ring attractors in biology encode internal/cognitive/state variables, and aren't directly encoding action spaces. This discrepancy raises concerns about the biological relevance of the proposed approach.\", \"More clarity is needed on why the baselines chosen are justified. I think a better baseline would be to use a continuous 1-D circular variable (e.g. encoded as <sin \\\\theta, cos \\\\theta>) as the action space and then discretize it to match the action-space of the environment.
This would be a more appropriate baseline to isolate the benefits of the ring attractor architecture itself.\", \"The paper doesn't fully explain the role and impact of uncertainty quantification. Maybe the authors should be exploring a simpler ring-attractor model which doesn't include uncertainty quantification.\"], \"questions\": [\"# Additional suggestions for improvement\", \"L135: \\\"spatial exploitation\\\"?\", \"L193: what are m and n here?\", \"L265: what is algorithm referring to here in \\\"function approximation algorithm\\\"?\", \"L343: typo: abs(m - m)\", \"L377: could you clarify how one neuron corresponding to one (s,a) still makes this a ring-attractor\", \"L406: typo: cummulative\", \"L408: How was \\\\pi/6 chosen?\", \"L417: Can you clarify where this comes from -- \\\"mean computational overhead ....\\\"?\", \"L460: Can you provide at least one example each for how a game's action space has been mapped to Single and Double configurations\", \"The paper could use an overhaul in its organization. e.g. 
the ablation tests are essential to justify/clarify how the ring-attractor helps.\", \"Related recent research worth citing\", \"Kutschireiter et al, \\\"Bayesian inference in ring attractor networks\\\", PNAS 2023\", \"Singh et al, \\\"Emergent behaviour and neural dynamics in artificial agents tracking odour plumes\\\", Nature Machine Intelligence 2023\", \"RI Wilson, \\\"Neural Networks for Navigation: From Connections to Computations\\\", Annual Review of Neuroscience 2023\", \"Xiong and Sridhar, \\\"Understanding the Influence of Uncertainty and Noise on Spatial Decision Dynamics\\\", Conference on Cognitive Computational Neuroscience 2024\", \"Note that I did not spend significant time on the supplement\", \"12/3: Increased my score from 5 to 6 for the authors' efforts revising the manuscript and responding to my feedback.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"We would like to let the reviewer know we have provided an appendix section (A.2.3) showing the emergence of sustained ring patterns in our experiments. We have also expanded our work with new experimental results and other supplementary appendices, see Manuscript Update at the top of the page for all changes made. We appreciate the reviewer's requests that helped develop the presentation of the key concepts in this research.\"}",
"{\"comment\": \"We sincerely thank the reviewer for their thorough examination of our work's fundamental principles:\\n\\n**1 Q-values and variance lower bound**\\n\\nWe thank the reviewer for their careful reading of our work. However, we believe there may be a misunderstanding about how Q-values are used in our model. The Q-values are not used as connection weights within the ring attractor network. Instead, the connection weights between neurons are fixed and set by distance-dependent functions (w_(E_m\\u2192E_n) = e^(-d^2(m,n))), which define the fundamental topology and dynamics of the ring attractor.\\nQ-values are used only as input signals (scaling factors K_i) to the ring attractor network, as shown in equation 7. This formulation allows learned action values to influence the ring attractor's activity pattern while maintaining its core structural properties. Regarding potential numerical instability from variance terms, this is only a concern at the initialisation point, as the uncertainty estimates from our Bayesian approach maintain reasonable lower bounds due to the inherent uncertainty in value estimation.\\n\\n**2 DL RNN-based implementation**\\n\\nThe relationship between RNNs and ring attractors is fundamentally grounded in their shared ability to model continuous-time neural dynamics, as established by [Beer (1995)](https://journals.sagepub.com/doi/10.1177/105971239500300405) in their work on CTRNNs. As demonstrated in our exogenous model based on Touretzky's ring attractor, these networks inherently rely on recurrent connections to maintain stable attractor states, providing a natural motivation for using RNNs in our Deep Learning implementation. Additionally, we are working on an appendix section to visualise the emergence of sustained ring patterns in our experiments.\\n\\n**3 SNN and biological plausibility**\\n\\nAs highlighted by the reviewer, SNN is not relevant to this approach.
We apologise for the misunderstanding and have corrected the methodology. Furthermore, we acknowledge that ring attractors primarily serve heading direction representation, as shown by [Kim et al. (2017)](https://www.science.org/doi/10.1126/science.aal4835). Our innovation lies in adapting spatial awareness mechanisms for decision-making: just as biological ring attractors encode spatial representations for navigation, our model uses similar principles to organise actions in a circular topology. \\n\\n**Q1:** The symbol $\\\\alpha$ represents the preferred orientation for a given action $a$, while $\\\\mu_a$ represents the value associated with that action. For clarity, we have revised this notation since it was inconsistent with Equation 7, now using the same variable in both equations with the distinction that this one represents an average over a sum of samples $\\\\bar{Q}(s,a)$.\\n\\n**Q2:** The symbol $\\\\alpha$ represents the preferred orientation for a given action $a$, while $\\\\mu_a$ represents the value associated with that action.
For clarity, we have revised this notation since it was inconsistent with Equation 7, using now the same variable in both equations with the distinction that this one represents an average over a sum of samples $\\bar{Q}(s,a)$.\n\n**Q3:** While the reviewer raises important points about verification of ring attractor dynamics, our model demonstrates key ring attractor properties inherent in its design and implementation:\nThe spatial encoding of actions in our circular topology is not arbitrary: we enforce distance-dependent weighting through equations (5-6) that explicitly model local excitation and global inhibition, a hallmark of ring attractor dynamics.\nThe performance improvements we observe in spatially-structured tasks (Table 1) suggest the model successfully maintains stable representations of spatial relationships between actions, consistent with ring attractor behaviour.\nOur ablation study (Section A.1.2) shows that randomly disrupting the ring structure significantly degrades performance, indicating the ring topology actively contributes to information processing rather than being merely architectural.\nWe acknowledge this and are working on additional visualisations of neural activity patterns that would strengthen our claims.\n\nSeveral neural architectures in machine learning have drawn inspiration from biological principles. One common example is Convolutional Neural Networks, which were influenced by the hierarchical organisation observed in the mammalian visual system [Hubel and Wiesel, 1962](https://pmc.ncbi.nlm.nih.gov/articles/PMC1359523/). 
While our research addresses a different domain, we similarly attempt to incorporate insights from neural mechanisms, though we acknowledge the substantial abstraction involved in translating biological principles to computational frameworks.\n\nWe agree with the reviewer and moved [Zhang (1996)](https://www.jneurosci.org/content/16/6/2112)'s key proposal from the appendix to line 30 of the main text, reflecting its importance as a first milestone.\"}",
"{\"comment\": \"The authors have partially addressed my questions and improved the presentation of the results in the revised manuscript. I would like to increase my score from 3 to 5.\\n\\nHowever, my question, \\\"Is there a ring attractor at all in the model?\\\" was not adequately addressed. I believe this is an important point that requires further clarification.\"}",
"{\"comment\": \"We thank the reviewer for the valuable review; we especially appreciate the insights provided to expand on the experimental setup and validation context.\n\n**1 SNN and Biological Relevance**\n\nAs pointed out by the reviewer, SNN is not relevant to this research; we apologise for the confusion. We've corrected the methodology to address this error. We use continuous-time recurrent neural networks (CTRNN) as the initial framework for the exogenous model.\n\nWe appreciate the reviewer's perspective on ring attractors, and would like to offer a complementary interpretation based on spatial encoding evidence. [Kim et al. (Science, 2017)](https://www.science.org/doi/abs/10.1126/science.aal4835#:~:text=Ring%20attractors%20are%20a%20class,the%20representation%20of%20heading%20direction) demonstrated that ring attractors represent spatial awareness through heading direction, a neural code that bridges internal representation and spatial behaviour. Their work showed how bump-like activity patterns maintain a persistent sense of orientation while smoothly updating with the fly's movements. This continuous integration of spatial state and movement suggests that ring attractors have evolved to handle representations that are simultaneously internal (maintaining spatial awareness) and behaviorally relevant (guiding navigation).\n\n**2 Baselines and 1-D circular variable action space**\n\nThe reviewer raises a very valid point that was considered and implemented by the authors. In our experiments, we mapped the Highway benchmark's navigation action space as a one-dimensional circular variable. We recognise this was not explicitly mentioned. 
We are working to include this as a new section in the appendix, alongside information about other benchmark action spaces, ring attractor experimental layouts (both single and double configurations), and baseline-RA experimental integration.\\n\\n**3 Role and impact of uncertainty quantification**\\n\\nThe role of uncertainty quantification (UQ) in our ring attractor model is fundamental to both its theoretical foundation and practical performance for the exogenous ring attractor model. Our experimental results in Figure 2 clearly demonstrate that the uncertainty-aware version (BDQNRA-UA) consistently outperforms both the baseline (BDQN) and the simpler ring attractor model without uncertainty (BDQNRA), providing empirical justification for its inclusion.\\n\\nThe paper provides an explanation of UQ's role through mathematical formulations in Section 3.1.3 and explicit equations showing how uncertainty values (\\u03c3\\u2090) directly influence the Gaussian functions driving ring attractor dynamics. This integration aligns with biological insights, as shown by recent work [Kutschireiter et al., 2023](https://pubmed.ncbi.nlm.nih.gov/36812206/), demonstrating the utility and performance of integrating uncertainty into ring attractors.\\n\\nRather than adding unnecessary complexity, UQ is tightly integrated with the ring attractor architecture through the Gaussian activation functions (\\u03c3\\u1d62 = \\u03c3\\u2090 in Equation 7), creating a natural mechanism for uncertainty-aware action selection.\\n\\n**Q1:** Fixed, thank you.\\n\\n**Q2:** Here, $m$ and $n$ represent the positions (indices) of neurons within the ring network, where these indices define each neuron's location and determine the distance between any two neurons via $|m-n|$. 
\\n\\n**Q3:** A function approximation algorithm in this context is a method (typically a neural network) that learns to estimate Q-values (expected future rewards) instead of storing exact values for every possible state-action pair in reinforcement learning. It works by converting input states into feature vectors and using learned weights to approximate Q-values through the equation $Q(s,a) = \\u03b8\\u1d40x(s)$, making it practical for large or continuous state spaces.\\n\\n**Q4:** Fixed, thank you.\\n\\n**Q5:** While each neuron corresponds to an action-value pair, the ring attractor properties emerge from the circular connectivity pattern between neurons (defined by d(m,n) = min(|m - n|, N - |m - n|)) rather than from what the neurons represent.\\n\\n**Q6:** Fixed, thank you.\\n\\n**Q7:** Added brief explanation (line 410). \\n\\n**Q8:** In this context, \\\"mean computational overhead\\\" refers to the extra processing time or computational cost added by the ring attractor implementation. Whenstated\\\"297.3% overhead\\\", it means the integrated CTRNN ring attractor model (BDQNRA) took 3 times more time to run than the baseline model (BDQN).\\n\\n**Q9:** We are also to provide an appendix section that provides insights on integration for the different models and environments, including single and double ring implementations.\\n\\n**Q10:** We agree with reviewer in that ablation studies are essential to demonstrate the ring attractor's impact on learning performance across experiments. We are exploring a potential reorganization of the manuscript, though we are still determining which sections could be moved to the appendix given significant space constraints.\"}",
"{\"comment\": \"The new experiment, which evaluates BDQN, BDQNRA, and BDQNRA-UA models using a 1-D circular variable action space in the Highway environment, is presented in Figure 2, Section 4.1.\n\nApologies for the confusion; to clarify, orange highlighting denotes only corrections to the original text and does not apply to extended content added to improve its readability. Neither the new experiment nor the additional appendices (A.2.3, A.4, and A.5) are highlighted in orange.\"}",
"{\"comment\": \"We have addressed the reviewer's concern and expanded our work with new experimental results and supplementary appendices. See Manuscript Update at the top of the page for all changes made. We thank the reviewer for their thorough examination and encourage them to explore these additions.\"}",
"{\"comment\": \"We appreciate the reviewer's feedback and their acknowledgement of the improvements in our revised manuscript. We especially appreciate raising a key question regarding whether our model truly resembles ring attractor structure and behaviour in the DL implementation. We can condense this evidence for our DL agent through several key aspects:\n\n**Structural Properties:**\nAs detailed in Section 3.2 and Appendix A.3, our model implements fundamental ring attractor dynamics through structured connectivity. The forward pass (V(s)) and hidden state (U(v)) are computed following distance-dependent weight functions defined in Equation 13, where both input-to-hidden connections and recurrent connections are laid out in a circular topology. The learnable time constant \u03c4 controls information integration into the ring attractor as in CTRNN-based approaches. This architecture allows us to regulate input signals to the RNN layer, balancing spatial relationships with task-specific learning while preserving ring attractor dynamics.\n\n**Empirical Validation:**\nAs presented in our new appendix section A.2.3, we observe evidence of ring attractor dynamics preservation in our model, even when all parameter weights for the ring connections are set to be trainable. This means they are free to evolve toward whatever connectivity the DL algorithm finds most efficient. As shown in Appendix A.2.3, the forward pass connections preserve the ring structure over training time, with distance-dependent decay patterns maintained throughout the learning process. This may suggest that the network naturally favours maintaining spatial topology for transmitting sensory information on a per-frame basis. 
The hidden-to-hidden connections demonstrate markedly different behaviour, evolving beyond their initial ring structure to develop specialised patterns that enable the encoding of environment-specific relationships between neurons in the hidden space.\n\n**Ablation Evidence:**\nOur ablation studies in Section A.2.2 provide support for the relevance of the ring attractor structure. For the DL implementation, we removed the circular weight distribution to assess its importance. The significant performance degradation observed in the presented environments (Ms Pacman and Chopper Command) emphasises that the ring topology actively contributes to information processing rather than being merely architectural.\"}",
"{\"comment\": \"I appreciate the authors\u2019 effort in addressing several common concerns raised by me and other reviewers. I have just a few remaining questions and would love to hear from the authors:\n\n**1. Role of the ring attractor in uncertainty quantification (UQ):** The authors mention UQ as one of the advantages of the RA model. However, in the BDQN-UA framework (which is the only model that embodies uncertainty estimation), the uncertainty $\\sigma_a$ is quantified through BDQN and then directly substituted into eq. 1. Then what role does the RA play in UQ? \n\n**2. What is the nature of the spatial relationship encoded by RA? And why does it improve action sampling even in the absence of UQ?** Based on eq. 1 & 13, nearby neurons in the ring share similar or correlated inputs and strengthen each other\u2019s activity through local excitation. If nearby neurons are used to output values for actions targeting nearby locations, it would result in positive correlations in action values that are not necessarily desirable. For example, moving left and up in an Atari game may have quite different values and it is unclear a priori why one would want to enforce correlation among them. Could the authors elaborate on when such spatial correlation in action values is desirable? Furthermore, I noticed a few Atari games that were implemented in the original EffZero paper were not tested here (incl. Amidar, Assault, Demon Attack). Could the authors comment on why their action space may not be compatible with the RA?\n\n**3. I appreciate the newly added Appendices, though am still baffled at some implementation details.** E.g. 1) In line 349-351 the authors describe the input-to-hidden weights as fixed, though the first equation in eq. 13 contains the learnable parameter $\\lambda$, which also appears in the hidden-to-hidden weight. 
2) In line 404, the authors mention that $\\sigma_a$ is held fixed at $\\pi/6$ in BDQNRA, which \u201cenables smooth action transition while preventing interference with opposing actions\u201d\u2014how does one determine this value for an arbitrary task with a different action space? 3) In eq. 19 of Appendix A.4.1, is $Q(s,a)$ computed in one step or does it involve multiple inference steps as illustrated in Fig. 8?\n\n**4. To help clarify some of the questions above, would the authors be open to sharing their code?**\"}",
"{\"comment\": \"The authors have responded to all my questions and they are mostly clear now, although there is a repetitive equation in line 876. The rating is improved to 6.\"}",
"{\"metareview\": \"This paper proposes to use ring attractor network components into Q-learning based reinforcement learning. The general idea is to provide spatial information and relationships for actions (e.g. arrow-keys in video games) and induce correlations, rather than having RL agents learn actions as independent and separate choices.\\n\\nThe paper further claims that by adding ring attractors, this leads to uncertainty estimates over the Q-function, which allows better decision making over unseen environment areas.\", \"experiments_are_conducted_over\": \"* Super Mario Bros (discrete action space) and Highway (Most likely discrete?)\\n* A subset of the Atari100K benchmark\\n\\nwhere adding the ring attractor improves performance over base methods. \\n\\nOne core issue as an outsider to the ring attractor framework is that the paper has not been easy to follow, despite Figure 1 attempting to represent the mechanics. There is too much dense notation that makes the ring attractor's design difficult to imagine, and I think more effort is required to make it accessible and impactful to a wider audience.\\n\\nRegarding other issues, I relied on the reviewer discussion (see below).\", \"additional_comments_on_reviewer_discussion\": \"Post-rebuttal, this paper obtained an extremely balanced score between rejection and acceptance, i.e. (5,5,6,6). In the initial review cycle, the scores were lower (e.g. a 5 was previously a 3).\\n\\nThis was a personally difficult read for me, as I don't have enough background knowledge on ring attractors and their biological significance. 
Therefore I relied much more on the feedback of all the reviewers who do have such knowledge.\", \"the_main_issues_raised_by_reviewers_around_this_topic_have_been\": [\"There is no actual spiking neural network in the model / possible mathematical mischaracterizations of the network.\", \"It's unclear what contributions the ring attractor component brings to uncertainty quantification, when the last layer weights of the Q-function are already made to be probabilistic / allow Bayesian linear regression\", \"Whether a ring attractor is \\\"inside the model at all?\\\"\", \"While the authors have attempted to resolve these issues during the rebuttal, there are still clarity issues which remain (as seen from e.g. Reviewer xjv3's 19 questions on details). Given the additional borderline scores, I overall recommend rejection for now - I think this paper requires additional polishing to be resubmitted to the next ML conference.\"]}",
"{\"comment\": \"We have addressed the reviewer's concern by removing redundant equations from line 1011 that did not add substantive context to the section. We have also expanded our work, see Manuscript Update at the top of the page, with new experimental results and supplementary appendices, which we encourage the reviewer to examine.\"}"
]
} |
E5YnuidZ9W | Understanding Mode Connectivity via Parameter Space Symmetry | [
"Bo Zhao",
"Nima Dehmamy",
"Robin Walters",
"Rose Yu"
] | Neural network minima have been observed to be connected by curves along which train and test loss remain nearly constant, a phenomenon known as mode connectivity.
While this has enabled applications such as model merging and fine-tuning, its theoretical explanation remains unclear.
We propose a new approach to exploring the connectedness of minima using parameter space symmetry.
By linking the topology of symmetry groups to that of the minima, we derive the number of connected components of the minima of linear networks and show that skip connections reduce this number.
We then examine when mode connectivity and linear mode connectivity hold or fail, using parameter symmetries which account for a significant part of the minimum.
Finally, we provide explicit expressions for connecting curves in the minima induced by symmetry.
Using the curvature of these curves, we derive conditions under which linear mode connectivity approximately holds.
Our analysis highlights the role of continuous symmetries in understanding the neural network loss landscape. | [
"symmetry",
"mode connectivity"
] | Reject | https://openreview.net/pdf?id=E5YnuidZ9W | https://openreview.net/forum?id=E5YnuidZ9W | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"qm2JxLV1Sv",
"ocgcVbXNk5",
"njg08Redn1",
"l6bN892Mtk",
"kxpNGruXUf",
"kPHTHrxvMX",
"ZdzAox15g0",
"ZDOGZ0KfAT",
"UBXlhvIBgA",
"QsWwpWs6Y8",
"O5NUaJ6W3r",
"LnMc84li9V",
"Kj6ein7cCB",
"JN9G4ucn7o",
"IRn7kToavP",
"Dk5iSn0xpm",
"4BT9DGfVGd"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"decision",
"meta_review",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732611096465,
1732656643895,
1732686034083,
1732522705519,
1732522958661,
1730393155758,
1732522595998,
1737524227672,
1734450337774,
1732522799622,
1730287521712,
1733073495379,
1730722271703,
1730686673363,
1732522830899,
1730470013309,
1732522902720
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_aNjo"
],
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_MzKE"
],
[
"ICLR.cc/2025/Conference/Submission12981/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12981/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12981/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_mbky"
],
[
"ICLR.cc/2025/Conference/Submission12981/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12981/Area_Chair_918u"
],
[
"ICLR.cc/2025/Conference/Submission12981/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_aNjo"
],
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_SRZz"
],
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_TNu1"
],
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_MzKE"
],
[
"ICLR.cc/2025/Conference/Submission12981/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12981/Reviewer_SRZz"
],
[
"ICLR.cc/2025/Conference/Submission12981/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Thank you for the replies and additions. I raise my score to acceptance.\"}",
"{\"title\": \"Response\", \"comment\": \"I thank the authors for engaging with my feedback and clarifying my concerns and questions!\\n\\n**Intuition for connectivity:** Thank you for clarifying, I think it is maybe worth pointing out in the main text that for most practical purposes, connectivity can be thought of as path connectivity, while still pointing out that this is not always true in a strict mathematical sense. This allows for better intuition in my opinion.\\n\\nMy concerns are all addressed.\"}",
"{\"comment\": \"We agree that path connectedness is useful for developing intuitions for connectedness and have added this to Section 3. Thank you for the suggestion.\"}",
"{\"comment\": \"Thank you for your detailed comments and positive feedback!\\n\\n> 1. What does it mean intuitively and geometrically if two network parameters are in the same connected component? As the authors point out, this does not imply being path-connected, so while it sounds convincing at first, it is actually not clear to me how this notion ties back to the usual geometric understanding of connectivity used in deep learning. I would appreciate if the authors could clarify things. I.e. how pathological are counter-examples of \\u201cconnected but not path-connected\\u201d?\\n\\nIntuitively, imagine the minimum of a loss function as a manifold or a high dimensional surface. Then two network parameters are in the same connected component if they reside on the same piece of this manifold. Connectedness ensures there is no separation of the space into disjoint non-empty open subsets, while path-connectedness allows one to construct continuous paths between points. \\n\\nWhile it is theoretically possible for two points in the same connected component to lack a path between them, such counterexamples are often specifically constructed and unlikely to be encountered in the context of deep learning. A classic example is the topologist\\u2019s sine curve $T = T_0 \\\\cup T_+$, where $T_0=\\\\{(x, y): x=0 \\\\text{ and } y \\\\in [-1,1]\\\\}$ and $T_+ = \\\\{(x, y): x \\\\in (0, 2/\\\\pi] \\\\text{ and } y = \\\\sin(1/x)\\\\}$. This space is connected but not path-connected since the infinitely oscillating waves prevent any continuous path from linking $T_+$ to $T_0$.\\n\\n> 2. One weakness in the \\u201clarge barrier\\u201d type of results is their global nature, i.e. there is no notion of what types of minima SGD actually finds. The counter-example (as far as I understood) for which a path is constructed for Prop 5.3 and 5.4 starts from a set of parameters and then constructs a new one using the rescaling symmetry. 
It is not clear to me how \u201cdegenerate\u201d these solutions are in the sense that SGD might never choose them due to its implicit bias. I believe there are actually results that show that SGD prefers certain parameters out of the re-scale orbit. In general I think it would be important to better highlight that this work deals with the loss landscape in a global sense, and is not restricted to the minimizers discovered by SGD.\n\nWe agree that our work studies the entire set of minima instead of the ones discovered by SGD, and have added clarifications regarding this aspect in the introduction. We do not believe this is necessarily a weakness though. While SGD is known to explore only a small portion of the minimum, it is less clear whether or to what extent other optimizers behave in similar ways. Additionally, a characterization of the complete set of minima might be useful beyond the context of optimization, such as in studying model complexity. Hence, although our paper does not focus on minimizers discovered by SGD, the results could still be useful in understanding the loss landscape. \n\n> 3. The word \u201cconnected\u201d has several meanings in this work and I sometimes was confused which one is currently used in a given part of the text. E.g. when two points are in the same connected component (e.g. when mapping with permutations), this is not the same thing as when two points cannot be connected linearly etc. I feel like the manuscript could do a better job at distinguishing these things.\n\n\u201cConnected\u201d indeed has multiple possible meanings. To distinguish different definitions, we have included clarifications in the paper and checked for consistency of the use of this concept. In the first part of the paper (Section 3 and 4), connectedness assumes its mathematical definition given in Section 3. 
From Section 5 onwards, when discussing mode connectivity, we use the term \\u201cmode connectivity\\u201d when points can be connected by arbitrary curves and always specify \\u201clinear mode connectivity\\u201d when only linear interpolation is considered.\"}",
"{\"title\": \"Official Comment by Authors [2/2]\", \"comment\": \"### Response to questions\\n\\n> 1 - The networks analyzed are invertible up to the output layer, meaning the output dimension matches the input dimension. How strictly is this condition required? Does switching to a one-dimensional output immediately yield negative results, as suggested in Section 5.2?\\n\\nThe invertibility condition is required when we want to establish a homeomorphism between the minimum and the symmetry group. When there is one, we can easily infer topological properties of the minimum from the symmetry group. When the network is not invertible, as in the example with skip connections, we are still able to analyze the connectedness of the minimum, but this requires more careful handling of analyzing multiple orbits. Switching to a one-dimensional output may change the number of connected components of the minimum, although the direction of change may depend on the exact loss function (Proposition A.8). There does not seem to be a connection between this change and the failure cases of linear mode connectivity in Section 5.2, which is primarily caused by the non-compact symmetry group.\\n\\n> 2 - According to [3], layer-wise mode connectivity is achievable. Does Proposition 5.3 contradict this result, or is there a possible connection? \\n\\n> [3] Adilova, Linara, Asja Fischer, and Martin Jaggi. \\\"Layer-wise linear mode connectivity.\\\"\\n\\nProposition 5.3 does not contradict with the layer-wise connectivity result in [3]. In the proof, we construct the two minima $W, W\\u2019$ by rescaling two layers. As a result, both layers are different between $W$ and $W\\u2019$. This is different from the setting in Theorem 4.1 in [3], where only one layer is different between the two sets of parameters. 
The empirical observations of the connectivity of certain groups of layers in [3] may reflect the implicit bias of SGD, which means it is possible that the minima reachable by SGD are approximately linearly connected, even though the complete set of minima may have more complex structures. We appreciate the pointer to this relevant work and have added a brief discussion in the updated paper. \\n\\n> 3 - Is proposition about residual connections restricted to 1 dimension?\\n\\nYes, as mentioned in the proposition ($n=1$) and subsection header. When the weight matrices are higher-dimensional invertible matrices, the number of connected components is further reduced to 2. We are working on relaxing the invertibility condition and will include full proofs in the final version of the paper.\"}",
"{\"summary\": \"This paper introduces a method to determine the number of components achieving zero loss in linear regression by identifying a homeomorphism between the symmetry in parameter space and the set of parameters yielding zero loss. It demonstrates that permutations can link previously isolated components and offers a novel perspective on the effectiveness of residual connections.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The concept of employing a homeomorphism to connect the general linear group with the set of parameters yielding zero loss, and quantifying the loss basins by counting the connected components, is both innovative and intriguing.\\n2. This paper offers a fresh rationale for the effectiveness of residual connections by examining the connected components within the minima of the loss function.\\n3. In Section 5.1, the paper shows that permutations can link otherwise disconnected loss minima.\", \"weaknesses\": \"1. Lack of limitations: Since the homeomorphism is specifically tailored to linear regression models, it is necessary to clearly state this limitation in the introduction.\", \"questions\": \"1. Is it possible to experimentally validate the results of Sec. 6? Can we confirm that the valleys of the loss lines predicted by Eq. 7 correspond to the valleys in the two-dimensional heatmap of the loss landscape?\\n2. Is it possible to extend the analysis to more realistic models such as ResNet, which has a softmax function, using parameter symmetry and homeomorphic mapping?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your comments and positive feedback!\\n\\n> I see that there is no dependence on width of the network when considering the number of connected components, however permutation symmetries grow exponentially with the width of the network. This appears to be due to the fact we are studying a very simplified setting of linear networks. It\\u2019ll be useful for the readers to include a discussion on this.\\n\\nThe lack of dependence of the number of connected components on width is a result of the fact that the set of $n$ by $n$ invertible matrices ($GL_n(\\\\mathbb{R})$) has two connected components independent of $n$. Therefore, although wider networks have a larger symmetry group and a larger set of minima, the number of connected components remains unchanged. This is one example where connecting the minimum to symmetry groups brings out simple yet otherwise non-obvious results. We appreciate this question and have added a short discussion in Section 4.\\n\\n> It\\u2019ll improve the paper further to improve an example / figure for section 5.1 where permutations leads to mode connectivity. \\n\\nThank you for the suggestion. We have added an example in Appendix C.\\n\\n> It is well known that scale symmetries lead to a failure of linear mode connectivity however it is interesting that controlling the weight norms and control over the curvature leads to approximate linear connectivity.\\n\\n> How does this relate to empirical solutions explored by SGD? Specially because it appears that weight decay is necessary for lmc mod permutations.\\n\\nThis is an interesting observation, and we have included a short discussion at the end of Section 5.2. The empirical observation of mode connectivity and linear mode connectivity is likely due to the fact SGD typically only explores certain parts of the minimum, often referred to as implicit bias. Weight decay may further encourage SGD to favor certain minimizers. 
The subset of minima that is likely to be reached by SGD can therefore have very different structures than the entire set of minima.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"metareview\": \"This paper studies mode connectivity in neural networks taking a viewpoint from topology, and it provides a number of results on the structure of the loss landscape with particular emphasis on skip connections and permutations, highlighting the role of group actions. Specifically, the number of connected components at the 0-level set of the quadratic loss for invertible multi-layer perceptrons is computed. For one-dimensional spaces, it is shown that residual connections reduce the number of connected components. Next, the impact of permutations on connectedness is examined and an example where connectedness fails is provided. Finally, the authors characterize non-linear paths connecting minima and obtain bounds on their curvature.\\n\\nAll the reviewers agree that the topological perspective on mode connectivity is original, the results are new, and the paper is well written and accessible to a broad audience. These are all strengths of the paper. However, reviewer SRZz has raised an issue concerning the significance of the results. While simple results can in general be impactful, I concur with reviewer SRZz that the usefulness of the topological toolbox developed here towards understanding the loss landscape of neural networks remains unclear. More specifically, the first papers on mode connectivity date back to 2018; since then, there has been a lot of work in this direction, and the advantage of the methodology pursued by the authors w.r.t. this body of research is not evident.\\n\\nIn summary, the paper fails to provide a strong, novel, convincing insight regarding mode connectivity and, for this reason, I recommend rejection.\\n\\nI still find the perspective taken by the paper interesting and I do recommend that the authors keep working in this direction and provide more convincing evidence of the effectiveness of their approach. 
One possible direction (mentioned by multiple reviewers) would be algorithmic, in terms of the characterization of the implicit bias of (S)GD.\", \"additional_comments_on_reviewer_discussion\": \"The main weakness raised by reviewer SRZz is rather fundamental and requires re-thinking the approach and the results. As such, it could not be addressed in the short rebuttal period.\"}",
"{\"comment\": \"Thank you for your feedback and comments. You are right that this paper is held together by group actions - our main message is precisely that we can infer topological properties of the minimum from topological properties of symmetry groups. This connection, while obvious in hindsight, has been overlooked by mode connectivity researchers for years. We hope that our insights will help future research on loss landscapes. We also hope that the idea of inferring properties of an unknown object from a known one could inspire new work beyond this field.\\n\\n> The results are not very strong or interesting. Many standard basic results of math are presented like big theorems. Overall, this article does not make things simpler or clearer.\\n\\nWe appreciate your perspective, although we believe our results are novel and, according to other reviewers, will be of interest to the field. It is not our intention to make our theorems look like big results - we value simplicity over complexity. Our goal is to introduce new intuitions behind why and when mode connectivity holds and would appreciate concrete suggestions on how to make the presentation clearer.\\n\\n> Is there any reason to believe that there is a natural group action like the ones you explain beyond the cases where it is intuitively clear that there is one? \\n\\nYes, parameter space symmetry is prevalent in common architectures, and there exists complicated and possibly data-dependent symmetry group actions [1]. The high-dimensional nature of the minimum [2] also suggests possible group actions with nontrivial orbits. The existence and number of symmetries in neural network architectures is an active field. 
Recent work has also found symmetry groups and actions with an automated framework [3].\\n\\n> Can simulations reveal the presence of group actions or the relevance of your derivations beyond the obvious cases?\\n\\nIt is not clear whether simulation could reveal the presence of group actions, but other approaches, such as the learning-based symmetry discovery method in [3], have shown that there exist non-obvious parameter symmetries. Our paper complements these works by providing an application for the discovered symmetries.\\n\\n\\n*References:*\\n\\n[1] Zhao, Ganev, Walters, Yu, Dehmamy. Symmetries, flat minima, and the conserved quantities of gradient flow. arXiv preprint arXiv:2210.17216, 2022. \\n\\n[2] Cooper. The loss landscape of overparameterized neural networks. arXiv preprint arXiv:1804.10200, 2018.\\n\\n[3] Zhao, Dehmamy, Walters, Yu. Symmetry Discovery in Neural Network Parameter Spaces. UniReps 2024.\"}",
"{\"summary\": \"This paper introduces a topological framework to understand mode connectivity in general, and linear mode connectivity in particular. Initially, the authors use topological structures to calculate the number of connected components at the 0-level set of the quadratic loss for invertible multi-layer perceptrons. For one-dimensional spaces, they demonstrate that residual connections reduce the number of connected components. Subsequently, they examine the influence of permutations on connectedness and provide a setup where connectedness fails. Finally, conditions are presented for achieving low-loss curves that connect modes.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper proposes a very powerful framework to analyze properties of the loss surface of deep neural networks. In particular, understanding mode connectivity can shed light on the ways to optimize networks more efficiently. The paper generally provides clear and concise explanations of the proofs for its theorems and propositions.\", \"weaknesses\": \"However, in its current form, the paper does not fully clarify the connection between topological concepts and the loss landscape of deep neural networks. While it opens with a detailed and precise introduction to topological concepts, it then directly applies these concepts to loss surfaces and networks without discussing necessary assumptions (such as the invertibility of all networks considered). Additionally, there is little exploration of how the topological concept of connected components relates to the depth of the network or which elements correspond to orbits or groups, potentially with examples. 
Although all this does not detract from the technical accomplishments, it may reduce the paper\u2019s impact by making it difficult for an unprepared reader to link the framework to neural network applications.\n\nIn the final section, the concept of curvature is used without clearly defining it in this context. Further, this section could connect more directly to practical applications by demonstrating empirical curves that connect modes and align with derived formulas (e.g., regarding loss growth). A similar issue arises in Section 5 concerning symmetries.\", \"minor_issues\": [\"one of the first works on the algorithms for finding connectivity is [1], not [2]\", \"the parameter space (Param) is referenced in Section 3.3 before being formally defined\", \"please use \\\\citep where appropriate (e.g., the last paragraph of Section 5.2)\", \"[1] Singh, Sidak Pal, and Martin Jaggi. \\\"Model fusion via optimal transport.\\\"\", \"[2] Ainsworth, Samuel K., Jonathan Hayase, and Siddhartha Srinivasa. \\\"Git re-basin: Merging models modulo permutation symmetries.\\\"\"], \"questions\": \"1 - The networks analyzed are invertible up to the output layer, meaning the output dimension matches the input dimension. How strictly is this condition required? Does switching to a one-dimensional output immediately yield negative results, as suggested in Section 5.2?\n\n2 - According to [3], layer-wise mode connectivity is achievable. Does Proposition 5.3 contradict this result, or is there a possible connection?\n\n3 - Is the proposition about residual connections restricted to 1 dimension?\n\n[3] Adilova, Linara, Asja Fischer, and Martin Jaggi. \\\"Layer-wise linear mode connectivity.\\\"\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"I have read the rebuttals and other reviews and thank everyone for their time. Unfortunately, it doesn't change my view, which is that I'm not learning anything about neural networks that I find interesting in this paper.\"}",
"{\"summary\": \"The paper studies mode connectivity in neural networks modulo parameter space symmetries from the perspective of topology.\n\nAuthors begin by counting the connected components, observing that for Euclidean distance as the loss function, the minima of a deep linear network with $l$ hidden layers and invertible weight matrices have $2^{l-1}$ connected components, followed by the observation that having skip connections similar to ResNets reduces the number of connected components. \n\nFor deep linear networks, authors show that one can reduce connected components if one takes permutations into account. \n\nAuthors then use layer-wise scale symmetries in deep networks to show that linear mode connectivity doesn\u2019t hold; however, if one controls the weight norms for each layer, one can control the error barrier incurred by linear interpolation within the same connected component. \n\nFinally, authors introduce general symmetry-induced curves that parameterize the level set of the loss, and use the curvature of the curve to give a sufficient condition for when approximate linear connectivity holds.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"This is a well written paper and authors include sufficient background to make the paper readable for a non-expert in topology, like myself, to follow all the results and make inferences.\", \"Authors provide a novel analysis studying mode connectivity in deep linear networks.\", \"Authors make a number of contributions studying necessary conditions for mode connectivity and approximate linear connectivity that can be used to understand symmetries in neural networks.\"], \"weaknesses\": [\"I see that there is no dependence on the width of the network when considering the number of connected components; however, permutation symmetries grow exponentially with the width of the network. 
This appears to be due to the fact that we are studying a very simplified setting of linear networks. It\u2019ll be useful for the readers to include a discussion on this.\", \"It\u2019ll improve the paper further to include an example / figure for Section 5.1 where permutations lead to mode connectivity. It is well known that scale symmetries lead to a failure of linear mode connectivity; however, it is interesting that controlling the weight norms and control over the curvature leads to approximate linear connectivity.\", \"How does this relate to empirical solutions explored by SGD? Especially because it appears that weight decay is necessary for LMC mod permutations.\"], \"questions\": \"It is well known that scale symmetries lead to a failure of linear mode connectivity; however, it is interesting that controlling the weight norms and control over the curvature leads to approximate linear connectivity.\n\n- How does this relate to empirical solutions explored by SGD? Especially because it appears that weight decay is necessary for LMC mod permutations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors investigate mode connectivity from a mathematical perspective. Several results are obtained: (1) The number of connected components of the set of minimizers is characterized in case of linear networks with and without skip connections, where adding skip connections reduced the number of components. (2) In case of 2 layers, they show how permutations can map points to different components, thus \\u201cconnecting\\u201d them and shedding light on recent empirical observations. (3) Next the authors also show that linear mode connectivity does not hold in case of ReLU networks, and that permuting the last two layers does not reduce the barrier either. (4) Finally, the authors characterize non-linear paths connecting such minima and obtain bounds on their curvature, which measures how far away one is from a \\u201clinear mode connectivity\\u201d regime.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-organized and states all the mathematical results in the Preliminaries section, making this a largely self-contained work and thus easier to read and understand. Linear mode connectivity is still lacking a proper mathematical understanding to this day, making this submission thus a timely contribution.\\n2. The authors manage to gain quite some insight into the problem with rather mathematically elementary tools, relying on topological properties and results from group theory. I appreciate the result showing that skip connections reduce the number of components, which is in-line with what people observe in practice in terms of easier optimization. \\n3. It is also quite nice that in case of two layers, the authors manage to show that permutations indeed connect the components back. 
While things in practice might be significantly more complicated than the setting considered in this work, I still believe this is a good first step towards obtaining a better understanding of this intriguing phenomenon.\", \"weaknesses\": \"1. What does it mean intuitively and geometrically if two network parameters are in the same connected component? As the authors point out, this does not imply being path-connected, so while it sounds convincing at first, it is actually not clear to me how this notion ties back to the usual geometric understanding of connectivity used in deep learning. I would appreciate if the authors could clarify things. I.e. how pathological are counter-examples of \\u201cconnected but not path-connected\\u201d?\\n2. One weakness in the \\u201clarge barrier\\u201d type of results is their global nature, i.e. there is no notion of what types of minima SGD actually finds. The counter-example (as far as I understood) for which a path is constructed for Prop 5.3 and 5.4 starts from a set of parameters and then constructs a new one using the rescaling symmetry. It is not clear to me how \\u201cdegenerate\\u201d these solutions are in the sense that SGD might never choose them due to its implicit bias. I believe there are actually results that show that SGD prefers certain parameters out of the re-scale orbit. In general I think it would be important to better highlight that this work deals with the loss landscape in a global sense, and is not restricted to the minimizers discovered by SGD.\\n3. The word \\u201cconnected\\u201d has several meanings in this work and I sometimes was confused which one is currently used in a given part of the text. E.g. when two points are in the same connected component (e.g. when mapping with permutations), this is not the same thing as when two points cannot be connected linearly etc. 
I feel like the manuscript could do a better job at distinguishing these things.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your comments and positive feedback!\n\n> Lack of limitations: Since the homeomorphism is specifically tailored to linear regression models, it is necessary to clearly state this limitation in the introduction.\n\nThank you for the suggestion. We have made the limitations explicit in the introduction. However, we would like to point out that while our examples of homeomorphism between the minimum and the symmetry group are limited to linear regression models, our framework can be applied to networks without a homeomorphism and different loss functions. For example, when the minimum comprises more than one orbit, we can still obtain the number of components by analyzing the connectedness of each orbit. Our method can also be generalized to loss functions other than the mean square loss. \n\n**Response to Questions**\n\n> 1. Is it possible to experimentally validate the results of Sec. 6? Can we confirm that the valleys of the loss lines predicted by Eq. 7 correspond to the valleys in the two-dimensional heatmap of the loss landscape?\n\nYes, we have added experiments showing that the inequality in Proposition 6.1 holds empirically (Figure 3a), and the loss on the curves induced by approximate symmetry is consistently low as predicted by Proposition 6.1 (Figure 3b,c). Since these curves live in a high-dimensional space, it is not straightforward to produce a two-dimensional heatmap. Nevertheless, we hope it suffices to show that the loss at every point of the curve is low compared to the loss on linear interpolations between two minima.\n\n> 2. Is it possible to extend the analysis to more realistic models such as ResNet, which has a softmax function, using parameter symmetry and homeomorphic mapping?\n\nWe believe it is possible to extend our results to a network with a softmax function. 
Softmax is known to have a translational symmetry, which means that points on the minimum can have different network outputs before applying the softmax while giving the same final output after the softmax. For each possible network output, the connectedness of the set of minima corresponding to that output can be analyzed by methods from our paper. The connectedness of the union of these sets, or the entire minimum, can then be obtained by analyzing the connectedness of the set of outputs that map to the same value after softmax. We will include a precise formulation and full proofs in the final version of the paper.\"}",
"{\"summary\": \"This paper studies the space of minima of neural networks (mostly in the linear case) via the angle of group symmetries. A number of standard results in math are listed, and then a number of small results about the structure of minima are given, with some emphasis on the role of skip connections and on the permutations. Each result highlights the role of group actions and this is the main originality of the paper. Overall, I do not see any results that are particularly striking or insightful to understand neural networks in practice. The derivations do not shed light on any phenomenon of interest, and the presence of group actions when it is not already almost trivial is not established. As a result, while the idea of looking at things from the angle of group action is appealing and elegant, it does not bring convincing additional insight to study the hard question of the minimum landscape of nonlinear neural networks. I would encourage the authors to keep looking in this direction, but something striking would be needed for me to be excited.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"Rigorous, mathematically clean, reasonably clear article.\", \"weaknesses\": \"The results are not very strong or interesting. Many standard basic results of math are presented like big theorems. Overall, this article does not make things simpler or clearer.\", \"questions\": \"Is there any reason to believe that there is a natural group action like the ones you explain beyond the cases where it is intuitively clear that there is one?\n\nCan simulations reveal the presence of group actions or the relevance of your derivations beyond the obvious cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Authors [1/2]\", \"comment\": \"Thank you for your comments! We are encouraged that you consider our framework powerful for analyzing the loss surface. We address your comments and questions below.\\n\\n> However, in its current form, the paper does not fully clarify the connection between topological concepts and the loss landscape of deep neural networks. While it opens with a detailed and precise introduction to topological concepts, it then directly applies these concepts to loss surfaces and networks without discussing necessary assumptions (such as the invertibility of all networks considered). Additionally, there is little exploration of how the topological concept of connected components relates to the depth of the network or which elements correspond to orbits or groups, potentially with examples. Although all this does not detract from the technical accomplishments, it may reduce the paper\\u2019s impact by making it difficult for an unprepared reader to link the framework to neural network applications.\\n\\nWe appreciate your suggestions on clarifying the connection to topological concepts. We have attempted to make the assumptions clear by stating them before each proposition as well as the subsection headers. Since the topological concepts from the preliminary section are mostly used in proofs, we did not reference them often in the main text. However, we have made an effort to explain the topological intuitions in the proof sketch and explanation of the theorems. We also hope the last two corollaries in Section 3 provide some correspondence, or at least a hint of connection, between the topological concepts and elements in neural networks. We will expand Section 3 by including more connection to neural networks if space permits.\\n\\n> In the final section, the concept of curvature is used without clearly defining it in this context. 
Further, this section could connect more directly to practical applications by demonstrating empirical curves that connect modes and align with derived formulas (e.g., regarding loss growth). A similar issue is in Section 5 concerning symmetries.\\n\\nThank you for these suggestions. In the final section, we have added a formal definition of curvature, as well as experiments showing that the loss on the curves induced by approximate symmetry is consistently low, as predicted by Proposition 6.1 (Figure 3b,c). For Section 5, we added a visualization showing that the loss barrier on a linear interpolation between two minima in a homogeneous network can become unbounded, as predicted by Proposition 5.3 (Figure 4 in Appendix C).\\n\\n> Minor Issues.\\n\\nThank you for pointing these out. We have added the reference, corrected the notation for the input space of $L$ in Section 3.3, and replaced \\\\citet with \\\\citep where appropriate.\"}"
]
} |
E5YmIBvOqV | Large Convolutional Model Tuning via Filter Subspace | [
"Wei Chen",
"Zichen Miao",
"Qiang Qiu"
] | Efficient fine-tuning methods are critical to address the high computational and parameter complexity while adapting large pre-trained models to downstream tasks.
Our study is inspired by prior research that represents each convolution filter as a linear combination of a small set of filter subspace elements, referred to as filter atoms. In this paper, we propose to fine-tune pre-trained models by adjusting only filter atoms, which are responsible for spatial-only convolution, while preserving spatially-invariant channel combination knowledge in atom coefficients.
In this way, we bring a new filter subspace view for model tuning.
Furthermore, each filter atom can be recursively decomposed as a combination of another set of atoms, which naturally expands the number of tunable parameters in the filter subspace.
By only adapting filter atoms constructed by a small number of parameters, while keeping the rest of the model parameters constant, the proposed approach is highly parameter-efficient. It effectively preserves the capabilities of pre-trained models and prevents overfitting to downstream tasks.
Extensive experiments show that such a simple scheme surpasses previous tuning baselines for both discriminative and generative tasks. | [
"Efficient Fine-tuning",
"Filter Decomposition",
"Filter Subspace"
] | Accept (Poster) | https://openreview.net/pdf?id=E5YmIBvOqV | https://openreview.net/forum?id=E5YmIBvOqV | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zf37VBOapl",
"xAs75awy45",
"uf07D0Buf7",
"pLJql2Dn05",
"p2duRNCKQr",
"j8heNwbaDi",
"iQrKNZODCe",
"i43u3LJGh1",
"RvyKulgu7m",
"RRtGNykIHq",
"Pkqlhzwsfz",
"FcqSSlhMPX",
"9mIJmBYcn1",
"0Soe2LxirN"
],
"note_type": [
"official_comment",
"official_review",
"decision",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732516518809,
1730683854157,
1737523623217,
1730082949195,
1732647395984,
1734769703566,
1732585193136,
1732571935826,
1732385604004,
1732385572061,
1732384880576,
1732385310796,
1730275527079,
1732760890829
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4177/Reviewer_gvwM"
],
[
"ICLR.cc/2025/Conference/Submission4177/Reviewer_MS4b"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4177/Reviewer_gvwM"
],
[
"ICLR.cc/2025/Conference/Submission4177/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4177/Area_Chair_fEBX"
],
[
"ICLR.cc/2025/Conference/Submission4177/Reviewer_gvwM"
],
[
"ICLR.cc/2025/Conference/Submission4177/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4177/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4177/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4177/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4177/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4177/Reviewer_GeoC"
],
[
"ICLR.cc/2025/Conference/Submission4177/Reviewer_GeoC"
]
],
"structured_content_str": [
"{\"title\": \"Official Comments by Reviewer gvwM\", \"comment\": \"Thank you for the detailed responses.\n\nMy concerns about the additional cost and GPU overhead have been well addressed. As for the comparison on VTAB and Dreambooth, I have seen the proposed method achieves the best performance.\n\nHowever, I also noticed the experimental results are not consistent with the original paper. For instance, in the FacT paper, the reported VTAB accuracy is 75.6 with 0.069M #param. But in the rebuttal, the corresponding results are 73.23 with 0.26M #param. So I wonder whether the experimental setup is different here? I still suggest using the official implementation of previous works to ensure the sota performance of the proposed methods. \n\nIn the Dreambooth task, since the OFT paper uses the DINO, CLIP-I, CLIP-T and LPIPS as the metrics, what is the performance of the proposed method on these metrics?\"}",
"{\"summary\": \"This paper presents a new way to decompose convolutional layers and experiments with a new way to fine-tune large models with those layers by adjusting a small number of parameters based on the decomposition. In particular, the observation that maintaining fixed atom coefficients leads to better results is shown based on the experimental results. Experimental results were compared with other PEFT methods such as LoRA and LoHa and showed interesting results in the provided examples.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper presents an interesting parameter decomposition method to split parameters in large convolutional models.\n2. In some situations as shared in the paper, the proposed method can achieve comparable or better results by fine-tuning an even smaller amount of parameters.\", \"weaknesses\": \"While the proposed decomposition and fine-tuning method is different, this method adjusts parameters in the big model. Comparatively, LoRA serves as a plug-in, which reduces the chance of hurting the capacity of pre-trained models.\", \"questions\": \"Parameter fine-tuning often involves one large pre-trained model and many small tasks. Multiple LoRAs can be plugged into one model, even though there could be conflicts, to handle that scenario. How could this method achieve that?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"This work proposes a PEFT technique for convolution by decomposing the convolutional kernel into spatial and channel components and only fine-tuning the spatial components. Furthermore, the authors introduce a second-order decomposition technique to allow for the training of more parameters. The authors validate the effectiveness of this method on various backbone models, such as ResNet50, ConvNeXt, and Stable Diffusion.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The idea of decomposing the convolutional kernel and only fine-tuning the spatial convolution part is interesting, providing options for fine-tuning convolution layers.\", \"The explanation of the methods and the mathematical formulas are clear.\"], \"weaknesses\": [\"The paper requires additional sparse coding at the start of training to decompose convolutional atoms and coefficient atoms. Due to the need to solve optimization problems, I express concern about its efficiency. The computational cost and time delay associated with this part need to be provided.\", \"The benchmarks compared in Tables 1 and 4 are not up-to-date. LoRA was proposed in 2021, but it is now 2024. To my knowledge, a series of related methods have been continuously proposed for discriminative tasks in recent years, such as SSF, FacT, Adapter, Compactor, BinaryAdapter, etc. The authors are encouraged to include the latest methods to demonstrate the effectiveness of the proposed method.\", \"The evaluation metrics for the generation task seem non-standard. It appears that the authors only compared results under one theme image, i.e., the castle. As far as I know, existing common experimental setups for evaluating subject-driven generation tasks use 750 prompt-image pairs, such as in OFT. 
The experimental setup in this paper only takes one subject image, making it difficult to prove the effectiveness of the method, especially considering the inherent randomness of diffusion. In addition, I also suggest adding OFT and COFT to the compared methods, which are important and widely used baselines in diffusion model fine-tuning, and are included in HuggingFace's PEFT library.\"], \"questions\": [\"Besides comparing the number of parameters, what is the GPU memory footprint during fine-tuning for the proposed method? Considering that there is already work indicating that PEFT methods are generally not memory-efficient.\", \"The idea of decomposing the convolutional kernel and only fine-tuning filter atoms is interesting. However, the experiments in this paper on various tasks are not solid enough to support the effectiveness of the method. It is necessary to further expand the comparison methods and improve the experimental settings. Considering all the factors, I tend to give a rating below the acceptance level.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer gvwM\", \"comment\": \"Thank you for the suggestions. We have incorporated the feedback provided and included content from the rebuttal in our revised manuscript.\"}",
"{\"metareview\": \"The paper presents the idea of filter subspace tuning, i.e. to fine-tune convolutional neural networks by only adapting spatial filter atoms while leaving the linear combination across channels frozen. The paper presents a clear motivation for the idea, which allows parameter-efficient fine-tuning of large models and can achieve results comparable to full fine-tuning as demonstrated on discriminative and generative tasks. After the rebuttal, all reviewers agree on the benefit of the approach.\", \"additional_comments_on_reviewer_discussion\": \"Initially, the reviewers raised several questions regarding, for example, the comparison to LoRA, the difference of spatial-only convolutions to group convolutions, details on the memory usage and computational costs, and the number of fine-tuning parameters. The rebuttal answered these questions so that all reviewers provided a final score of 6 (two reviewers raised their score to 6 after the rebuttal).\"}",
"{\"title\": \"Official Comment by Reviewer gvwM\", \"comment\": \"Thanks for the author response. With the clarified and additional results, my primary concern has been addressed. I still recommend that the authors further refine their writing. For instance, the qualitative presentation in Dreambooth, such as Fig. 1, 5, 8, 9, should not be constrained to 'castle' only; it is recommended to add other subjects. Additionally, in Tab. 4, the comparison on VTAB with recent PEFT methods such as FacT and SSF should also be added to offer a wider benchmark. I will raise my score to 6.\"}",
"{\"title\": \"Response to Reviewer gvwM\", \"comment\": \"Thanks to the reviewer for the follow-up feedback. We provide responses to your questions below.\\n\\n**Q1: Why are the results in the rebuttal different from the FacT paper?**\", \"a1\": \"FacT adopts a different method for calculating the average accuracy. In Table 1 of FacT, they use the average of group-wise average accuracy as the final accuracy. Specifically, they first get the averaged accuracy within each group, and then calculate the overall accuracy as the mean of these three averaged accuracy:\\n$[(70.6+90.6+70.8+99.1+90.7+88.6+54.1)/7+(84.8+96.2+84.5+75.7)/4+(82.6+68.2+49.8+80.7+80.8+47.4+33.2+43.0)/8]/3=(80.64+85.3+60.71)/3=75.6$.\\n\\nFollowing the same process, our method achieves 76.3, which still outperforms FacT. Specifically,\\n$[(70.5+96.3+74.4+99.4+92.1+90.4+52.7)/7+(85.9+96.+88.6+75.8)/4+(77.4+62.2+53.+82.6+78.1+55.1+31.7+39.5)/8]/3=(82.26+86.58+59.95)/3=76.3$.\\n\\nHowever, to maintain consistency with our paper in the rebuttal, we chose the standard way by calculating the average accuracy across all tasks, following the setting in SSF.\\nSpecifically, for FacT,\\n$(70.6+90.6+70.8+99.1+90.7+88.6+54.1+84.8+96.2+84.5+75.7+82.6+68.2+49.8+80.7+80.8+47.4+33.2+43.0)/19=73.23$.\\n\\nFor our method,\\n$(70.5+96.3+74.4+99.4+92.1+90.4+52.7+85.9+96.+88.6+75.8+77.4+62.2+53.+82.6+78.1+55.1+31.7+39.5)/19=73.77$.\\n\\nThe FacT paper excludes the parameters of the linear head, resulting in 0.069M parameters. To ensure consistency with our paper and SSF, we include the parameters of the linear head, which amount to 0.04M. The revised table is presented below.\\n\\n| Method | C100 | Cal. | DTD | Fl102 | Pets | SVHN | S397 | P.C. | Euro | R45 | Retin. | Cl./c | Cl./d | DM | KITTI | d./loc | d./ori | N/azi | N/ele | params. 
(M) | avg |\\n|---|------|------|-----|-------|------|------|--------|------|----------|--------|-------|-------|-------|------|-------|--------|--------|----------|----------|---------|-----|\\n| Adapter | **74.1** | 86.1 | 63.2 | 97.7 | 87.0 | 34.6 | 50.8 | 76.3 | 88.0 | 73.1 | 70.5 | 45.7 | 37.4 | 31.2 | 53.2 | 30.3 | 25.4 | 13.8 | 22.1 | 0.27 | 55.82 |\\n| FacT | 70.6 | 90.6 | 70.8 | 99.1 | 90.7 | 88.6 | **54.1** | 84.8 | **96.2** | 84.5 | 75.7 | **82.6** | **68.2** | 49.8 | 80.7 | **80.8** | 47.4 | **33.2** | **43.0** | **0.11** | 73.23 |\\n| SSF | 69.0 | 92.6 | **75.1** | **99.4** | 91.8 | 90.2 | 52.9 | **87.4** | 95.9 | 87.4 | 75.5 | 75.9 | 62.3 | **53.3** | 80.6 | 77.3 | 54.9 | 29.5 | 37.9 | 0.24 | 73.10 |\\n| Ours | 70.5 | **96.3** | 74.4 | **99.4** | **92.1** | **90.4** | 52.7 | 85.9 | 96. | **88.6** | **75.8** | 77.4 | 62.2 | 53. | **82.6** | 78.1 | **55.1** | 31.7 | 39.5 | 0.22 | **73.77** |\\n\\n**Q2: The OFT paper uses the DINO, CLIP-I, CLIP-T and LPIPS as the metrics, what is the performance of the proposed method on these metrics?**\", \"a2\": \"Our experiment follows the setup described in (Yeh, et al. ICLR 2024). We use DINO-v2-large to extract image embeddings and evaluate the *fidelity* of the generated images. We employ OpenCLIP (CLIP-ViT-giant), trained on a larger dataset, to assess *T2I alignment*. 
In comparison, the OFT paper employs DINO-v1-small for the DINO score, while CLIP-T and CLIP-I are based on CLIP-ViT-large.\\n\\nAs suggested by the reviewer, we reuse the metrics from the OFT paper and present the results in the table below.\\nConsidering the models used in the OFT paper are outdated, we have kept the metrics in our paper unchanged.\\n\\n| | LoRA | LoHa | LoKr | DiffFit | BitFit | OFT | COFT | Ours (C1)| Ours (C2)| Ours (C3)|\\n|---------------|-------|-------|-------|----------|--------|-----|------|------|------|------|\\n| DINO | 0.68 | 0.674 | *0.682* | 0.621 | 0.581 | 0.633 | 0.631 | 0.588 | 0.634 | **0.686** |\\n| CLIP-I | 0.800 | *0.801* | 0.798 | 0.774 | 0.758 | 0.788 | 0.784 | 0.750 | 0.787 | **0.803** |\\n| CLIP-T | 0.209 | 0.203 | 0.212 | 0.232 | *0.239* | 0.236 | 0.234 | **0.248** | *0.239* | 0.205 |\\n| LPIPS | 0.735 | 0.710 | 0.730 | 0.781 | *0.796* | 0.740 | 0.738 | **0.837** | 0.788 | 0.731 |\\n\\n\\nCompared to OFT and COFT, our method (C2) achieves a higher CLIP-T (0.239 vs. 0.236), indicating better T2I alignment, and a higher LPIPS (0.788 vs. 0.740), reflecting greater diversity. Our method also maintains good fidelity, as shown by the DINO (0.634 vs. 0.633) and CLIP-I (0.787 vs. 0.788). Furthermore, our approach requires significantly fewer tuning parameters (0.75M vs. 11.75M).\"}",
"{\"title\": \"Response to Reviewer gvwM (Part 2)\", \"comment\": \"**Q3: Provide additional comparison with OFT and COFT on subject-driven generation tasks with 750 prompt-image pairs.**\", \"a3\": \"We adopt the experimental setup of OFT and Dreambooth, evaluating our method and the baseline on 30 concepts from Dreambooth. Images are generated using 25 text prompts, resulting in a total of 750 prompt-image pairs. The results are presented in the table below.\\n\\n\\n| | LoRA | LoHa | LoKr | DiffFit | BitFit | OFT | COFT | Ours (C1)| Ours (C2)| Ours (C3)|\\n|---------------|-------|-------|-------|----------|--------|-----|------|------|------|------|\\n| Fidelity | *0.697* | 0.693 | 0.693 | 0.622 | 0.571 | 0.656 | 0.652 | 0.594 | 0.652 | **0.707**|\\n| Diversity | 4.84 | 3.96 | 5.14 | 7.22 | *10.08* | 5.86 | 5.92 | **20.42** | 9.37 | 6.92 |\\n| T2I Alignment | 0.232 | 0.216 | 0.238 | 0.268 | 0.277 | 0.267 | 0.264 | **0.301** | *0.279* | 0.236|\\n| Params. (M) | 22.67 | 8.47 | 1.06 | 0.58 | *0.34* | 11.75 | 11.75 | **0.05** | 0.75 | 2.39|\\n\\n\\nCompared to other methods, our approach achieves the highest diversity and T2I alignment while requiring a minimal number of tuning parameters with the C1 configuration. Using the C3 configuration, our method attains the highest fidelity among all methods. Additionally, the C2 configuration achieves the second-best T2I alignment while maintaining strong concept fidelity.\\n\\nCompared to OFT and COFT, our method (C2) achieves better T2I alignment (0.279 vs 0.267) and diversity (9.31 vs 5.86) while maintaining similar fidelity (0.652 vs 0.656). Additionally, our method requires significantly fewer tuning parameters (0.75 vs 11.75), as the number of parameters in the atoms is much smaller compared to the rest of the model. 
In our experiments, the rank of OFT is set to 8, which is the default setting for PEFT.\\n\\n**Q4: What is the GPU memory footprint during fine-tuning for the proposed method?**\", \"a4\": \"We have provided the GPU memory requirements of various generative methods in the following table, with a training batch size of 1. Our method requires less GPU memory than most other approaches, primarily due to fine-tuning fewer parameters. With the same training batch size, the intermediate features are similar across methods, but fewer parameters lead to reduced GPU memory usage for storing backward gradients.\\n\\n| | LoRA | LoHa | LoKr | DiffFit | BitFit | OFT | COFT | Ours (C2) |\\n|---------------|-------|-------|-------|----------|--------|-----|------|------|\\n| Mem. (MB) | 8181 | 8027 | 7931 | 7359 | 5433 | 7601 | 7601 | 7333 |\\n\\n\\n**Q5: It is necessary to further increase the comparison methods and improve experimental settings.**\", \"a5\": \"We have presented comparisons with Adapter, SSF, and FacT in Q2, as well as with OFT and COFT on 750 prompt-image pairs in Q3. Our method has shown effectiveness compared to baseline approaches, consistent with the experimental results reported in our paper.\"}",
"{\"title\": \"Response to Reviewer gvwM (Part 1)\", \"comment\": \"We sincerely thank Reviewer gvwM for the constructive comments. We have incorporated these suggestions into the revised manuscript and will continue refining our paper.\\n\\n**Q1: Provide computational cost and time delay associated with decomposing convolutional atoms and atom coefficients.**\", \"a1\": \"**Computational time:** The decomposition process using the ISTA algorithm for atoms and atom coefficients takes about 1 second for each layer and 20 seconds for the whole model, with the code implemented on a GPU. This time is negligible compared to the training duration, which is approximately 60 minutes.\\n\\nAdditionally, we only need to perform sparse coding once for each pre-trained model. The decomposed coefficients can then be reused across all fine-tuning tasks, further reducing the computational cost.\\n\\n**Computational cost:** We estimate the computation cost in terms of FLOPs for solving the sparse coding problem: $\\\\min \\\\frac{1}{2} ||W - \\\\alpha D||_2^2 + \\\\lambda ||\\\\alpha||_1$, where we aim to obtain atom coefficients $\\\\alpha$ and atoms $D$ from the pre-trained weights $W$. 
Here $\\\\alpha \\\\in \\\\mathbb{R}^{c'c/k^2 \\\\times m}$, $D \\\\in \\\\mathbb{R}^{m \\\\times k^2}$, $W \\\\in \\\\mathbb{R}^{c' \\\\times c}$, $c'$ and $c$ are the numbers of input and output channels, $k$ is the kernel size, $m$ is the number of filter atoms.\\nSuppose ISTA requires $K$ iterations, the FLOPs required for this algorithm are $K(4c'cm+c'c+6mk^2)$.\\n\\nIn comparison, given the input data $x \\\\in \\\\mathbb{R}^{B \\\\times c'}$ with batch size $B$, the FLOPs required for one linear layer $z=Wx+b$, where $W \\\\in \\\\mathbb{R}^{c' \\\\times c}$ is $6Bc'c+4Bc+c'c+c$ which includes $2Bc'c+2Bc$ (forward pass), $4Bc'c+Bbc$ (backward pass) and $c'c+c$ (update parameters).\\n\\nSuppose we have $c'=c=512$, $k=4$, $B=64$, $m=9$, with one iteration the computational cost of the decomposition is approximately $9.7$ MFLOPs, while the computational cost of one linear layer is $101$ MFLOPs.\\n\\n**Q2: Provide comparison with recent methods, such as SSF, FacT, Adapter, Compactor, BinaryAdapter, etc.**\", \"a2\": \"Additional results are presented in the table below. Compared to SSF, FacT, and Adapter, our method achieves higher average accuracy while keeping the number of tuned parameters minimal.\\n\\n| Method | C100 | Cal. | DTD | Fl102 | Pets | SVHN | S397 | P.C. | Euro | R45 | Retin. | Cl./c | Cl./d | DM | KITTI | d./loc | d./ori | N/azi | N/ele | params. 
(M) | avg |\\n|---|------|------|-----|-------|------|------|--------|------|----------|--------|-------|-------|-------|------|-------|--------|--------|----------|----------|---------|-----|\\n| Adapter | **74.1** | 86.1 | 63.2 | 97.7 | 87.0 | 34.6 | 50.8 | 76.3 | 88.0 | 73.1 | 70.5 | 45.7 | 37.4 | 31.2 | 53.2 | 30.3 | 25.4 | 13.8 | 22.1 | 0.27 | 55.82 |\\n| FacT | 70.6 | 90.6 | 70.8 | 99.1 | 90.7 | 88.6 | **54.1** | 84.8 | **96.2** | 84.5 | 75.7 | **82.6** | **68.2** | 49.8 | 80.7 | **80.8** | 47.4 | **33.2** | **43.0** | 0.26 | 73.23 |\\n| SSF | 69.0 | 92.6 | **75.1** | **99.4** | 91.8 | 90.2 | 52.9 | **87.4** | 95.9 | 87.4 | 75.5 | 75.9 | 62.3 | **53.3** | 80.6 | 77.3 | 54.9 | 29.5 | 37.9 | 0.24 | 73.10 |\\n| Ours | 70.5 | **96.3** | 74.4 | **99.4** | **92.1** | **90.4** | 52.7 | 85.9 | 96. | **88.6** | **75.8** | 77.4 | 62.2 | 53. | **82.6** | 78.1 | **55.1** | 31.7 | 39.5 | **0.22** | **73.77** |\"}",
"{\"title\": \"Response to Reviewer MS4b\", \"comment\": \"We sincerely thank Reviewer MS4b for the supportive feedback. We have addressed the clarification in the revised manuscript and will continue to refine our paper.\\n\\n\\n**Q1: While the proposed decomposition and fine-tuning method is different, this method adjusts parameters in the big model. Comparatively, LoRA serves as a plug-in, which reduces the chance of hurting the capacity of pre-trained models.**\", \"a1\": \"Our method is also used as a plug-in. As shown in Figure 2, our method keeps the parameters $F$ in the large model fixed. We only tune $\\\\Delta F=\\\\alpha \\\\times \\\\Delta D$, which is composed of a fixed coefficient $\\\\alpha$ and a tunable filter atom $\\\\Delta D$.\\n\\nFurthermore, we demonstrate in our paper that our method more effectively preserves the capacity of pre-trained models. For instance, in Table 2, compared to the pre-trained model that generates the most diverse images with the highest T2I alignment, our method maintains high diversity and T2I alignment. In contrast, LoRA overfits to the fine-tuned concept, resulting in significantly lower diversity and T2I alignment.\\n\\n**Q2: Parameter fine-tuning often involves one large pre-trained model and many small tasks. Multiple LoRAs can be plugged into one model to handle that scenario, even though there could be conflicts. How could this method achieve that?**\", \"a2\": \"As our method functions as a plug-in, it allows for a separate $\\\\Delta D$ for multiple small tasks. Consequently, our approach can exhibit behavior similar to LoRA in handling multiple subtasks.\"}",
"{\"title\": \"Response to Reviewer GeoC\", \"comment\": \"We sincerely thank Reviewer GeoC for the constructive feedback. We have incorporated most of these suggestions into the revised manuscript and will continue to refine it to further clarify these points.\\n\\n\\n**Q1: What is the difference when using group convolution and point-wise convolution as filter atoms and coefficients?**\", \"a1\": \"The main difference between our approach and group convolution or point-wise convolution lies in its design for parameter-efficient fine-tuning, enabling our method to reconstruct the parameters of the pre-trained model. For instance, in a convolutional layer with $c'$ input channels and $c$ output channels, our method uses filter atoms and atom coefficients to represent the weight update $\\\\Delta F$ as $\\\\alpha \\\\times \\\\Delta D$. In contrast, group convolution and point-wise convolution are unable to represent such weight updates.\\n\\nIn our paper, we further demonstrate that our formulation can be extended to linear weights, a capability that cannot be achieved by group convolution or point-wise convolution.\\n\\n**Q2: Discuss the memory usage and computation of the proposed method. How to obtain the total parameters of fine-tuning across different networks?**\", \"a2\": \"**Memory usage.** The GPU memory requirements for various generative methods are shown in the table below, with a training batch size of 1. Our method uses less GPU memory than most other approaches, mainly because it fine-tunes fewer parameters. While intermediate features are similar across methods for the same batch size, fewer parameters result in reduced GPU memory usage for storing backward gradients.\\n\\n| | LoRA | LoHa | LoKr | DiffFit | BitFit | OFT | COFT | Ours (C2) |\\n|---------------|-------|-------|-------|----------|--------|-----|------|------|\\n| Mem. 
(MB) | 8181 | 8027 | 7931 | 7359 | 5433 | 7601 | 7601 | 7333 |\\n\\n**Computational cost.** The FLOPs for our method are about $4Bc'c/k_c+4Bk_cc'+4Bc+mk_c^2$, where $B$ is the batch size, $c'$ and $c$ are the numbers of input and output channels, $k_c$ is the size of atoms, $m$ is the number of filter atoms.\\nSuppose we have $c'=c=640$, $k_c=4$, $m=9$, $B=1$; our method requires only about $0.4$ MFLOPs.\\n\\n**Number of parameters.** Let's consider two types of layers as examples: convolutional layers with dimensions $(c', c, k, k)$, and attention layers with parameters $W_q$, $W_k$, $W_v$, $W_o$, which have dimensions $(d,d)$.\\nThe table below lists the PEFT fine-tuning methods along with their corresponding parameter counts. Suppose $c'=c=d=640$, $k=3$, the hyper-parameter for the other approaches is $r=8$, and the hyper-parameters for our method are $k_c=4, m=9, m_1=3$.\\n\\n| | Conv. | Param. | Attn. | Param. |\\n|---------------|-------|-------|-------|-------|\\n| Original | c'ckk | 3,686,400 | 4d^2 | 1,638,400 |\\n| LoRA | c'kr + ckr | 30,720 | 8dr | 40,960 |\\n| LoHa | 2c'kr + 2ckr | 61,440 | 16dr | 81,920 |\\n| LoKr | c'k + ck + r^2 | 3,904 | 8d+4r^2 | 5,378 |\\n| OFT | c'ckk/r | 460,800 | 4d^2/r+4d | 207,360 |\\n| Ours ($D$ or $D_c$) | mk^2 | 81 | $4mk_c^2$ | 576 |\\n| Ours (+$\\\\beta$) | $mm_1k^2$ + $c'mm_1$ | 17,523 | $4mk_c^2$ | 576 |\\n\\nIn the table, \\\"Ours ($D$ or $D_c$)\\\" refers to our method with tuning filter atoms $D$ and atoms in the linear layer $D_c$, while \\\"Ours (+$\\\\beta$)\\\" indicates that, in addition to tuning filter atoms, we also incorporate overcomplete filter atoms and their coefficients $\\\\beta$.\\n\\n\\nCompared to other approaches, our method requires the fewest parameters. To determine the parameter counts reported in the paper, we enumerate all the model parameters and sum those that require gradients.\\n\\n**Q4. There are multiple important hyper-parameters. 
How to set these hyper-parameters?**\", \"a4\": \"We have conducted an ablation study on these hyper-parameters in Table 1.\\n\\nWe typically set $m=k^2$ to ensure that the reconstructed convolutional kernels from the filter atoms and atom coefficients are full rank, where $k$ represents the kernel size. For instance, for a convolutional layer of size $(c', c, k, k)$, when $k=3$, we set $m=9$.\\n\\nIn Section 4.2, we observe that increasing the hyperparameters $m_1$ and $k_c$ gradually improves accuracy, but also results in more parameters to tune. In our experiments, we find that $m_1=3$ performs well in most cases. For $k_c$, a value of $4$ works effectively for discriminative tasks, while $k_c=16$ is better suited for generative tasks.\"}",
"{\"summary\": \"This paper proposes to fine-tune large pre-trained models over the filter subspace by only adjusting filter atoms and keeping atom coefficients unchanged for parameter-efficient fine-tuning. To adapt to more complex tasks, the number of tunable parameters in filter subspace is increased to construct an overcomplete set of filter atoms by recursively decomposing each filter atom over another set of filter atoms. Experiments on multiple CNN network architectures across discriminative and generative tasks show the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.Clear motivation to fine-tune the spatial-only filter atoms for PEFT.\\n2.An interesting idea is to use the overcomplete filter atoms to improve performance.\\n3.Comprehensive experiments to evaluate the effectiveness of the proposed method.\", \"weaknesses\": \"1. Spatial-only convolution and cross-channel mixing are similar to group convolution and point-wise convolution. What is the difference when using group convolution and point-wise convolution as filter atoms and coefficients?\\n\\n2. The authors mainly consider the parameter usage by only fine-tuning filter atoms. I think memory usage and computation are important for PEFT, which should be discussed in this paper for further evaluating the effectiveness of the proposed method. In addition, how to obtain the total parameters of fine-tuning across different networks should be analyzed to improve the readability\\n3.There are multiple important hyper-parameters (e.g., $m, m_1, k_c$), which significantly affect the final performance. How to set these hyper-parameters.\", \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your detailed rebuttal. All my concerns are properly addressed. I will raise my score to 6.\"}"
]
} |
E5DYpUWsES | Manifold K-means with $\ell_{2,p}$-Norm Maximization | [
"Fangfang Li",
"Quanxue Gao",
"Qianqian Wang",
"Cheng Deng",
"Xiaoke Ma",
"Jing Li"
] | Although a variety of different methods have emerged in the field of clustering, K-means still occupies an important position, and many advanced clustering methods even rely on the K-means to achieve effective cluster detection. However, the sensitivity of K-means to the selection of the initial cluster center and its limited ability to handle nonlinear separable data somewhat restrict its clustering performance. In order to overcome the limitations of K-means, we draw inspiration from manifold learning and redefine K-means as a manifold K-means clustering framework. This framework supports various types of distance matrices, thus facilitating the efficient processing of nonlinear separable data. A unique advantage of this approach is that it does not require the calculation of the cluster center, while it maintains the consistency between manifold structure and cluster labels. Additionally, we highlight the significant role of the $\ell_{2,p}$-norm; by maximizing the $\ell_{2,p}$-norm, we can ensure the balance of classes in the clustering process, which is also supported by theoretical analysis. The results from extensive experiments across multiple databases substantiate the superiority of our proposed model. | [
"Clustering",
"Manifold Learning",
"K-means",
"$\\ell_{2,p}$-Norm"
] | https://openreview.net/pdf?id=E5DYpUWsES | https://openreview.net/forum?id=E5DYpUWsES | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"h43XmPpRYe",
"ezVctd6Q3A",
"TeIEHUybbQ",
"JmaddP7Qv5",
"Esi59wt05U",
"9LXWtd9nk9"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1733164080845,
1730459626142,
1730382353181,
1730249790282,
1737624661680,
1730551203658
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission133/Reviewer_zgDc"
],
[
"ICLR.cc/2025/Conference/Submission133/Reviewer_LkpW"
],
[
"ICLR.cc/2025/Conference/Submission133/Reviewer_BryM"
],
[
"ICLR.cc/2025/Conference/Submission133/Reviewer_zgDc"
],
[
"ICLR.cc/2025/Conference/Submission133/Authors"
],
[
"ICLR.cc/2025/Conference/Submission133/Reviewer_9cF4"
]
],
"structured_content_str": [
"{\"title\": \"Thanks to the Associate Program Chairs\", \"comment\": \"Dear Associate Program Chairs,\\n\\nThank you for your recognition of our contributions.\\n\\nYour suggestion is very meaningful, and I will provide more specific recommendations in future reviews.\\n\\nThe authors did not provide any feedback in response to my comments, the other reviewers' comments, or your suggestions. Therefore, I believe the authors may have abandoned the article.\\n\\nBest regards.\"}",
"{\"summary\": \"This paper proposes a manifold k-means method, which reformulates k-means in terms of manifold learning and then plugs a balance regularized term into it. The experimental results show its effectiveness.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The experimental results are good.\\n2. The presentation is good and easy to follow.\", \"weaknesses\": \"1. My major concern is about its novelty. The relationship between k-means and manifold learning or spectral clustering has been widely studied in previous works, e.g., [1] and [2]. Section 3 (i.e., Rethinking for K-means) of this paper seems not to provide any new insight about k-means. In addition, the balance regularized term seems to be a simple extension of previous works. Previous works, e.g., [3], show that when $p=1$, it can lead to a balanced result. This paper seems to be only an extension of $p$, which is not significant enough for publication in ICLR.\\n\\n2. Since the paper proposes a balance regularized term, in the experiments, they should also compare with some state-of-the-art balanced clustering methods to show the effectiveness.\\n\\n3. I'm also interested in the case where the ground truth is imbalanced. In this case, how does the proposed method perform? It would be better to conduct experiments to discuss this case.\\n\\n[1] Centerless Clustering, in IEEE TPAMI 2023.\\n[2] Efficient Clustering Based On A Unified View Of K-means And Ratio-cut, in NeurIPS 2020.\\n[3] Balanced k-Means and Min-Cut Clustering, in arXiv 2014.\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"In the literature, k-means is limited by its sensitivity to the selection of the initial cluster centers and by nonlinearly separable data. To overcome the two issues, the authors draw inspiration from manifold learning and propose a manifold K-means clustering framework.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method seems effective according to the experiment results of Table 1.\", \"weaknesses\": \"1. In lines 51-52, the authors point out that some methods overlook the alignment of data geometry with labels. Could the authors explain this more clearly?\\n\\n2. The proposed objective is Eq. (16), which is similar to the kernel k-means objective (see Section 2.1 of [1]). Could the authors discuss their similarities and differences?\\n\\n3. The first two contributions in the Introduction are duplicated, as are the last two.\\n\\n4. In Table 2, KNN is used to construct the distance matrix. Since KNN is a supervised method and this paper focuses on clustering, the proposed 'our-KNN' method should be further clarified.\\n\\n5. Currently, the words in figures are too small and should be enlarged.\\n\\n6. The expression should be polished. For example, \\\"equation 18\\\" in line 299 should be \\\"Equation (18)\\\" or \\\"Eq. (18)\\\", etc.\\n\\n7. Convergence analysis should be provided.\\n\\n[1] Liu J, et al. Optimal Neighborhood Multiple Kernel Clustering with Adaptive Local Kernels, TKDE.\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The author proposes (1) rewriting $k$-means clustering into a manifold paradigm, (2) utilizing discrete clustering labels $Y$ , (3) adding $l_{2,p}$-norm constraints to $Y$ to promote cluster balance, (4) deriving the algorithm and detailing the optimization process, and (5) conducting a series of experiments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The article is easy to read.\\n2. The proofs of two simple theorems are complete and correct.\\n3. The writing style is generally standard and free of obvious errors.\\n4. The motivation behind the paper is good, but it falls short of the points mentioned in the contributions by the authors.\", \"weaknesses\": \"1. Using discrete labels $Y$ is not an innovative work, and many clustering algorithms relax discrete labels to the continuous domain to make the learning process more effective.\\n2. The author proposes using $l_{2,p}$-norm constraints on the discrete label matrix $Y$ to promote cluster balance. However, (1) applying $l_2$-norm directly could achieve this goal, (2) this is achieved by adding a regularization term to enforce cluster balance, which is not a model-driven effect and can be applied to any clustering algorithm, (3) in Fig.1, only experiments within the range of 0.1 to 1 were performed, without testing values greater than 1.\\n3. The Pendigits dataset is clearly a dataset with very balanced clusters. When $\\\\lambda$ is high, it should fit this characteristic, yet the clustering performance significantly decreases. Please explain the reason.\\n4. One of the main contributions claimed by the author is cluster balance; however, no information is provided regarding the characteristics of each cluster in the experimental datasets.\\n5. From Table 1, it can be seen that the default distance measure used in the comparison algorithms is Euclidean distance without any hyperparameters. 
However, the proposed algorithm only shows performance improvement on one dataset using Euclidean distance measure when requiring hyperparameter tuning, showing almost no improvement on all other datasets. This suggests that the discretization of $Y$ did not play a substantial role. The improvement in clustering performance mainly stems from changes in the distance measure, which can also be applied to other clustering algorithms.\\n6. From the loss function and clustering performance convergence plots, it can be observed that convergence is essentially achieved after one iteration. Therefore, in subsequent iterations, does the discrete $Y$ remain unchanged? How can it be proven that the proposed algorithm effectively learns during the optimization process?\", \"questions\": \"As the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}",
"{\"summary\": \"This paper introduces a new manifold-based K-means clustering framework that addresses limitations in standard K-means, particularly sensitivity to initial cluster centers and difficulty with nonlinearly separable data. By leveraging manifold learning, the authors redefine K-means to process complex geometric structures directly, without the need to compute a centroid matrix. The paper also introduces the \\u21132,p-norm, which is maximized to balance class distributions within the clustering process. Extensive experiments and theoretical analyses demonstrate that the proposed manifold K-means method outperforms traditional and kernel-based K-means on various datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.The paper presents an approach by redefining K-means in a manifold framework, leveraging the \\u21132,p-norm to balance class distributions.\\n2.Methodology and theoretical basis are well-structured, with explanations supporting the novel aspects of the method.\\n3.The approach has potential in clustering applications that require handling complex, nonlinear data.\", \"weaknesses\": \"1.The paper does not fully clarify how the proposed method theoretically differs or improves upon existing manifold-based clustering methods.\\n2.While the method is promising, the lack of real-world applications or case studies makes it difficult to gauge its practical impact.\\n3.Experiments are limited to controlled datasets, leaving questions about performance and scalability in high-dimensional, real-world scenarios.\\n4.Some terminology and symbols lack clarity, especially in the theoretical sections, which may hinder readability.\", \"questions\": \"1.How does the computational complexity of this method compare to traditional and kernel-based K-means?\\n2.Has a sensitivity analysis been performed on the \\u21132,p-norm parameter? 
Does the model perform consistently across parameter variations?\\n3.How does the manifold structure improve clustering accuracy or interpretability, compared to other manifold clustering methods?\\n4.Could the authors provide evidence supporting the claim that the method maintains consistency between manifold structure and clustering labels?\\n5.What are the specific advantages of this approach over K-means++ or other initialization-robust methods in complex clustering tasks?\\n6.Have the authors validated the scalability of this approach on higher-dimensional, real-world datasets, where nonlinear structures may be more intricate?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
E4roJSM9RM | Unveiling the Secret of AdaLN-Zero in Diffusion Transformer | [
"Jie Zhu",
"Mingyu Ding",
"Boqiang Duan",
"Leye Wang",
"Jingdong Wang"
] | Diffusion transformer (DiT), a rapidly emerging architecture for image generation, has gained much attention. However, despite ongoing efforts to improve its performance, the understanding of DiT remains superficial. In this work, we delve into and investigate a critical conditioning mechanism within DiT, adaLN-Zero, which achieves superior performance compared to adaLN. Our work studies three potential elements driving this performance, including an SE-like structure, zero-initialization, and a “gradual” update order, among which zero-initialization is proved to be the most influential. Building on this insight, we heuristically leverage Gaussian distributions to initialize each condition modulation, termed adaLN-Gaussian, leading to more stable and effective training. Extensive experiments following DiT on ImageNet1K demonstrate the effectiveness and generalization of adaLN-Gaussian, e.g., a notable improvement of 2.16% in FID score over adaLN-Zero. | [
"Diffusion transformer",
"zero-initialization",
"image generation"
] | Reject | https://openreview.net/pdf?id=E4roJSM9RM | https://openreview.net/forum?id=E4roJSM9RM | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wHSJTnEMSd",
"rdYKQSBvbl",
"q7ZYtEW5I4",
"pm1lQHiHKc",
"pf9tJ6z8zr",
"nODMRz9ZWU",
"mfzIQSN2pP",
"mVlnS0r3DL",
"dvAQuk22lA",
"Zgrn4mqezJ",
"SO5843gPM0",
"MyUU3WN0XX",
"KnDTzcU2uw",
"J7P4Pg791g",
"GqpIWnF2z9",
"GJkyR7DdJv",
"DH6B5bVeN0",
"CHSS8Bh5Mh",
"C0KvtTFMLi",
"B28uLhoOdf",
"ARFQMBSQS8",
"7bmXvLjMZV",
"43wq6pXwfI",
"2pYdKAWw9C"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732112490847,
1732112402433,
1732161126124,
1730714005949,
1734347753551,
1732265488704,
1732856164657,
1730358969578,
1732112540508,
1733132681550,
1732111959224,
1732193151695,
1729689775723,
1732112347351,
1733132920137,
1732867226592,
1732112015115,
1730293578339,
1737523570351,
1733132820210,
1733295997880,
1732112103459,
1733132763797,
1733132571185
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Reviewer_nDDD"
],
[
"ICLR.cc/2025/Conference/Submission3334/Reviewer_ZpLc"
],
[
"ICLR.cc/2025/Conference/Submission3334/Area_Chair_wdRv"
],
[
"ICLR.cc/2025/Conference/Submission3334/Reviewer_p8jh"
],
[
"ICLR.cc/2025/Conference/Submission3334/Reviewer_ZpLc"
],
[
"ICLR.cc/2025/Conference/Submission3334/Reviewer_p8jh"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Reviewer_nDDD"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Reviewer_KCHJ"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
],
[
"ICLR.cc/2025/Conference/Submission3334/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer nDDD (1/2)\", \"comment\": \"**Q1:** The contribution of this paper is limited; the only significant advancement is the introduction of adaLN-Gaussian\\n\\n**A1:** **We respectfully disagree with the reviewer's comment.**\\n\\nOur paper is not merely a method proposal but, more importantly, an analytical study. Specifically, besides the introduction of adaLN-Gaussian, our detailed analysis of adaLN-Zero also cannot be ignored because it unveils the reason why adaLN-Zero outperforms adaLN, which enhances the community's understanding. Moreover, our analysis is the basis of the introduction of adaLN-Gaussian. It would be inappropriate to neglect the contribution of our analysis.\\n\\n**In contrast, our analysis is acknowledged by other reviewers.** For example, it is acknowledged by Reviewer [p8jh] that we provide sufficient evidence to support the exploration inside adaLN-Zero. And Reviewer [KCHJ] also comments that our analysis provides some inspiring conclusions.\\n\\n**Q2:** The performance gains are also limited.\\n\\n**A2:** **We may not agree with the reviewer's comment about the performance gains of our methods.**\\n\\nThe improvement of our method is acknowledged by reviewer [ZpLc] who comments that our work is simple and effective, achieving remarkable performance.\\n\\nIf the reviewer thinks that our performance gains, e.g., **2.16 FID in 400K**, are limited, what does the reviewer think of the improved performance of SiT [1], accepted at ECCV 2024? For example, SiT-XL/2 outperforms DiT-XL/2 by **2.3 FID in 400K** in Table 1 of its paper (https://arxiv.org/pdf/2401.08740). (Our method and SiT both use the same structure of DiT.)\\n\\nWe sincerely appreciate the reviewer's time and effort, and we have no intention of offending the reviewer. However, to some extent, these comments seem to be unreasonable.\\n\\nWe hope that the reviewer could reevaluate the contributions of our work. 
As acknowledged by reviewer [ZpLc], our work is simple and effective (with only one line of code replaced). Moreover, our method in Table 4 of our paper has shown great generalization to various DiT variants and DiT-based models.\\n\\n[1] Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. ECCV2024\\n\\n**Q3:** The experiments conducted are insufficient. Training for 800k iterations may not be adequate for the convergence of the DiT model. Given the 2% improvement in the current results at 400k iterations, it raises doubts about whether Gaussian initialization will outperform zero initialization in terms of final performance.\\n\\n**A3:** We kindly remind the reviewer that in this paper we do not alter the learning algorithm or any structure of the DiT model, but only initialize its condition mechanism differently. **Therefore, theoretically, if there is no limit on training steps and excluding local optima, the final converged performance of adaLN-Zero would be similar to the performance of adaLN-Gaussian because the model capacity is the same.** However, as reviewer [KCHJ] points out, different initialization methods mainly affect the convergence speed. Under the same training steps, our method allows the DiT model to converge faster. For example, adaLN-Gaussian achieves an FID of 14.84 after 600K training steps, while adaLN-Zero requires 800K steps to reach a similar FID of 14.73. When further extending the training time, adaLN-Gaussian achieves an FID of 10.83 after 1.5M training steps while adaLN-Zero requires around 2.4M steps to reach a similar FID of 10.67.\\n\\n\\n**Q4:** Conducting additional experiments on transformer-based models such as SiT and PixArt-alpha (text-to-image).\\n\\n**A4:** We thank the reviewer's suggestion. SiT and PixArt-alpha are both excellent works in the community. 
Due to the limited time, high GPU computing demand (64 V100 & 26 days), and the large amount of internal data used in PixArt-alpha, we chose to perform additional experiments on the transformer-based SiT. We use the best-performing SiT-XL/2 trained on ImageNet1K 256x256 for 50K steps. adaLN-Zero produces 71.90 FID, while adaLN-Gaussian yields 67.15 FID, significantly outperforming adaLN-Zero and demonstrating the effectiveness of our method. Moreover, we also perform experiments using DiT-XL/2 trained on other datasets including Tiny ImageNet, AFHQ, and CelebA-HQ for 50K steps. We report all the results below. These results further show the effectiveness and generalization of adaLN-Gaussian.\\n\\n| Dataset | Tiny ImageNet | AFHQ | CelebA-HQ | ImageNet1K (SiT-XL/2)\\n|----------|----------|----------|----------| ----------|\\n| adaLN-Zero | 37.11 | 13.52 | 8.01 | 71.90 |\\n| adaLN-Gaussian | 36.07 | 12.58 | 7.54 | 67.15|\"}",
"{\"title\": \"Response to Reviewer KCHJ (2/2)\", \"comment\": \"**Q4:** Have you analyzed the distribution of parameters of other parts of DiT? Do they also end up being Gaussian? How were these parameters initialized?\\n\\n**A4:** Yes, we have analyzed the weight distribution of other parts of DiT including Attention and Mlp in the DiT Block, PatchEmbed, LabelEmbedder, and TimestepEmbedder in Appendix A.7. In our experiments, we find that all of their weight distributions except PatchEmbed end up being Gaussian-like.\\n\\nFor the last question (how were these parameters initialized?): in the DiT code, Attention and Mlp in the DiT Block, and PatchEmbed are initialized with Xavier uniform. LabelEmbedder and TimestepEmbedder are initialized with a normal distribution.\\n\\nNaturally, we can consider Gaussian initializations for these modules as well, except PatchEmbed, to accelerate training. For example, we could uniformly use Gaussian initialization for Attention and Mlp in the DiT Block. We set the mean to 0 and use several choices for std such as 0.001, 0.01, 0.02, 0.03, and 0.04. We use DiT-XL/2 and train for 50K steps for simplicity. The results are shown below.\\n\\n| Std | Default| 0.001 | 0.01 | 0.02 | 0.03 | 0.04 |\\n|----------|----------|----------|----------| ----------|----------| ----------|\\n| FID | 76.21 | 92.09 | 85.28 | 80.89 | 91.21 | 98.50 |\\n\\nWe see that the performance is inferior to the default initialization. Therefore, more precise hyperparameter tuning may be needed for these modules to further improve the performance, which we leave as future work.\\n\\nWe hope our response will clarify the reviewer's confusion and alleviate the concern. And we sincerely hope to obtain support from the reviewer.\"}",
"{\"comment\": \"I do not neglect the detailed analysis of adaLN-Zero presented in this manuscript, which I have summarized in the Strengths section and considered in my final review. My primary concern is that, as you mentioned, the theoretically converged performance of adaLN-Gaussian proposed in this paper is expected to be similar to that of adaLN-Zero; however, there is no experimental support for this claim. You referenced SiT, but Table 1 in SiT paper shows that SiT's performance improvement of FID at 400k steps is significantly better than yours, and they provide evidence indicating that their final results are superior. The 400k experiments conducted in the current manuscript are insufficient to support your conclusion.\\n\\nIf adaLN-Gaussian can demonstrate that it requires fewer iterations to achieve a FID of 2.27, as seen with adaLN-Zero, it would substantiate your claim of reduced convergence time, which would be a meaningful contribution. In my research on DiT, I have observed that early convergence does not always lead to better final results; ultimately, what matters is the performance of the model at convergence, which raises my concerns.\\n\\nIn this work, adaLN-Gaussian represents the core contribution. The research on the differences between adaLN-Zero and adaLN is aimed at developing adaLN-Gaussian. Therefore, without a comprehensive demonstration of adaLN-Gaussian, I believe the contribution is quite limited. Parameter initialization for adaLN is indeed an intriguing topic that could provide valuable insights into the rapidly evolving field of image and video generation. Given its importance, I hope you can refine the manuscript before publishing it.\\n\\nI would be happy to continue this discussion with you.\"}",
"{\"summary\": \"This paper investigates three mechanisms of adaLN-Zero, including the SE-like network structure, zero-initialization, and the weight update order. Based on the analysis of adaLN-Zero, this work proposes an improved initialization method, adaLN-Gaussian, which utilizes Gaussian distributions to initialize the weights of each condition modulation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"The proposed adaLN-Gaussian initialization is simple and effective, achieving remarkable performance.\", \"weaknesses\": \"1. The overall logic of the paper is somewhat disorganized, and there is a logical gap between the analysis of adaLN-Zero and adaLN-Gaussian. Could the authors provide a more detailed explanation of why Gaussian distributions are used to initialize weights?\\n2. This work lacks mathematical analysis; all conclusions are drawn from experimental results and statistics, and the authors only conducted experiments using the ImageNet dataset. I think more extensive and general experiments are needed to validate the effectiveness of adaLN-Gaussian.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"This paper analyzes the DiT architecture for image generation, and in particular the adaLN-Zero conditioning. It identifies the zero initialization as critical for performance, and proposes an alternative Gaussian initialization, which is found to stabilize and improve training.\\nReviewers appreciate the presentation and the improved performance due to adaLN-Gaussian.\\nTheir main concerns relate to the scope of the contribution, analysis, and experiments.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided an extensive rebuttal and updates to the manuscript. The reviewers considered the rebuttal, which addressed their concerns only partially, and the final recommendations were split between two marginal accept recommendations and two (marginal) negative recommendations. Beyond the analysis of DiT initialization, the main contribution is an empirical exploration of the impact of Gaussian weight initialization, which is observed to accelerate convergence, although the paper does not extensively analyze the convergence speed across different initialization methods relative to the final point of convergence.\"}",
"{\"comment\": \"Thanks for your detailed answers! My questions have been addressed.\\nAnd I consider that such an interesting work should appear in ICLR 2025.\\nHowever, I cannot judge whether the proposed Gaussian initialization is a trick or can be popularly applied in practical DiT-based applications and scaled training. \\nSo I keep my rating (6).\"}",
"{\"comment\": \"Thank you for your response. However, the theoretical analysis I requested is still missing, and I remain unclear about the necessity of using Gaussian distribution for weight initialization. Additionally, in the provided extended experiments, training for only 50K steps is insufficient to demonstrate the effectiveness of adaLN-Gaussian. Can its advantages be maintained over longer training periods?\\n\\nDue to the lack of theoretical analysis and incomplete experiments, I am adjusting my rating to 5. I would be happy to continue the discussion if the authors provide more evidence and experiments.\"}",
"{\"summary\": \"This work focuses on an overlooked part of the diffusion transformer - the zero initialization of adaptive LayerNorm.\\nAuthors start from the similarity between SENet and DiT's adaLN, and develop some variants of adaLN.\\nAfter several discussions about the gradient update over the weight of adaLN and a summarization of the benefits of adaLN-Zero, authors propose to leverage Gaussian distributions to initialize the adaLN.\\nThe exps over IN-1K and DiT-based backbones are sufficiently performed.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The motivation is very good (the overlooked part in DiT) and the issue in this work is quite interesting.\\n\\n2. Clear writing and good presentation.\\n\\n3. Provides sufficient evidence to support the exploration inside adaLN-Zero, not only the gradient update but also the variants of adaLN.\\n\\n4. Good and fair experiments to support the proposed method. Authors provide relatively big-scale exps on IN-1K 512x512, which is expensive, and the backbones are mainly based on large-scale DiT. Besides, authors also evaluate the effectiveness on other DiT-based backbones.\", \"weaknesses\": \"1. The starting point is very unclear for me. How can you find the similarity between adaLN and the SE architecture?\\nI consider that this starting point should be better explained, with a structure figure of the SE architecture provided.\", \"questions\": \"1. How can you find the similarity between adaLN and the SE architecture? This is an interesting point. Hope that authors can provide some principles but not intuitions.\\n\\n2. As shown in Figure 2, the performance of Gaussian initialization shows a U-shaped trend. Could you please provide some analysis of this trend? 
Why does a large std bring relatively bad results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer nDDD (2/2)\", \"comment\": \"**Q5:** How do you summarize the differences between AdaLN-Zero and adaLN as the three points?\\n\\n**A5:** Essentially, our summarized three points reflect the roles of the two additional steps of adaLN-Zero compared to adaLN: 1) introducing the scaling element $\\\\alpha$; 2) zero-initializing the corresponding linear layers.\\n\\nThe first step changes the structure of the DiT model. Through our closer observation of the formed structure, we find that the modified structure shares similarities with the SE module (The first point).\\n\\nOur last two points are derived from the second step. First of all, we know that initialization has a basic role, namely, determining the initial value of the module weight (The second point). Particularly, for zero initialization, as previous work[2][3] suggested, it can additionally nullify certain output pathways at the beginning of training, thereby causing the signals to propagate through identity shortcuts. A direct way to reveal the influence of this shortcut behavior on model optimization is to analyze it through the lens of the gradient update (The third point).\\n\\nFinally, our step-by-step and decoupling experiments and analysis show that these three points are all related to the improved performance of the DiT model.\\n\\n[2] Accurate, large minibatch sgd: Training imagenet in 1 hour. Arxiv2017\\n\\n[3] Scalable Diffusion Models with Transformers. ICCV2023\\n\\n**Q6:** In Fig. 4, regardless of the type of initialization used, the weight distribution converges to a similar state after a certain number of training iterations. Why does the choice of initialization have such a significant impact on performance?\\n\\n**A6:** We think there may be two reasons. 
Firstly, the results presented in Figure 5 of the DiT paper (https://arxiv.org/pdf/2212.09748), such as the significant performance improvement of adaLN-Zero over adaLN in FID, indicate that the condition mechanism is essential for achieving superior metric outcomes. Hence, leveraging a more appropriate initialization allows the condition mechanism to learn better and thereby helps the DiT model to obtain superior outcomes.\\n\\nSecond, though the weight distribution converges to a similar state after a certain number of training iterations, e.g. 400K in Fig 4 where their distributions are very close to each other, the element-wise discrepancy may still be large. We average the absolute values of the differences between each element in all weight matrices corresponding to adaLN-Zero and adaLN-Mix. The averaged element-level value is 0.05, which indicates that there is still a large element-level value shift in the weight matrices. This may be the reason for the performance gap between adaLN-Zero and adaLN-Mix. We also calculate the average difference of the whole model weights between adaLN-Zero and adaLN-Mix. The result is 0.051, indicating there is also a large element-level value shift in the whole model.\\n\\n**Q7:** A few minor suggestions. 1) Simplify Figure 6 by merging the subgraphs to enhance information density. 2) Rotate Figure 2 to improve the visibility of the text.\\n\\n**A7:** We thank the reviewer's suggestions. We follow the reviewer's suggestion to rotate Figure 2, which indeed significantly improves the visibility of the text. As for simplifying Figure 6 by merging the subgraphs, we have tried to do so, e.g., by merging 4 subgraphs. However, while enhancing information density, the resulting figure does not clearly show the distribution of each block. This may not match the purpose of showing Figure 6, where we aim to demonstrate that $W^{L}_{\\\\alpha}$ in each block of DiT exhibits a Gaussian-like distribution.\"}",
"{\"title\": \"Responses to Reviewer ZpLc about theoretical analysis and more experiments (2/2)\", \"comment\": \"Further, to demonstrate that adaLN-Zero indeed forms a Gaussian-like distribution, we employ KL-Divergence to measure the distance between its distribution and a true Gaussian. Specifically, we use the weights of adaLN-Zero at 50K steps to compute their mean and standard deviation. These parameters are then used to initialize a Gaussian distribution, from which we sample the same number of points as adaLN-Zero. Finally, the KL distance between the two sets of sampled points is calculated using the nearest neighbor nonparametric estimation method, as detailed below.\\n\\n$D_{\\\\text{KL}}(P \\\\| Q) \\\\approx \\\\frac{1}{n} \\\\sum_{i=1}^{n} \\\\log \\\\frac{\\\\rho_i}{\\\\nu_i} + \\\\log \\\\frac{m}{n-1}$\\n\\n$m$ and $n$ are the numbers of sample points. $\\\\rho_i$ represents the nearest neighbor distance of point $x_{i}$ within $P$, and $\\\\nu_i$ is defined similarly.\\n\\nThe calculated KL-Div is 0.065. Typically, the closer the two distributions are, the smaller the KL-Div is. If the two distributions are the same, the KL-Div is 0. Therefore, the calculated result demonstrates that adaLN-Zero indeed forms a Gaussian-like distribution.\\n\\nRegarding the confusion about the necessity of using a Gaussian distribution for weight initialization, we apologize for the lack of clarity. We think there may be two reasons for this necessity. Firstly, the results in Figure 5 of the DiT paper (https://arxiv.org/pdf/2212.09748), e.g., the significant improvement of adaLN-Zero over adaLN in FID, indicate that the condition mechanism is essential for achieving superior metric outcomes. Therefore, selecting a suitable initialization for the condition mechanism could be necessary. Simultaneously, this is also an easily achieved and low-cost method to help with training. 
Secondly, using a Gaussian is motivated by our statistical results: we find that weights in the conditioning mechanism, though zero-initialized, always transition to a Gaussian-like distribution. Therefore, using a suitable Gaussian initialization could better expedite this distribution shift.\\n\\nFurther, we follow your advice to extend our training steps. However, we may need to explain that in fact 50K iterations (batch size=256) is relatively suitable for Tiny ImageNet, AFHQ, and CelebA-HQ because the number of their images is small, only 100K, 14K, and 30K, respectively, compared to ImageNet1K. We are sorry that we failed to update this information in time and apologize for the confusion this training step setting caused.\\n\\nBut we are willing to further extend the training steps. We extend an additional 50K for Tiny ImageNet and 150K for ImageNet1K (SiT-XL/2). The results are presented below, where our method adaLN-Gaussian still outperforms adaLN-Zero when training is longer.\\n\\n| Dataset | Tiny ImageNet | ImageNet1K (SiT-XL/2) |\\n|----------|----------|----------| \\n| adaLN-Zero | 32.45 | 31.07 |\\n| adaLN-Gaussian | 31.64 | 27.98 |\\n\\nWe do not extend more training steps for Tiny ImageNet, AFHQ, and CelebA-HQ as our experiments show this will lead to overfitting, i.e., an increase in FID. For example, an additional 2K training steps for CelebA-HQ increases the FID from 7.54 to 11.20.\\n\\nIn addition, we also extend the training steps of DiT to further show the effectiveness of our method under longer training time. The results are shown below. We find that adaLN-Gaussian produces 10.09 FID at 2300K training steps, outperforming adaLN-Zero at 2352K training steps (10.67 FID in its paper). Moreover, using the same cfg=1.5 as adaLN-Zero, adaLN-Gaussian achieves 2.27 FID at 5400K training steps, fewer than the 7000K used by adaLN-Zero, which is the longest training run in the DiT paper. 
These results further demonstrate the superiority of adaLN-Gaussian over adaLN-Zero.\\n\\n| Dataset | ImageNet (cfg=1) | ImageNet (cfg=1.5) | \\n|----------|----------|----------|\\n| adaLN-Zero | 10.67 (2352K) | 2.27 (7000K) |\\n| adaLN-Gaussian | 10.09 (2300K) | 2.27 (5400K) |\\n\\nWe hope our response above could address your concerns and help you reassess our work. We would greatly appreciate it if you could consider kindly raising your rating! Thank you once again for your patience.\"}",
"{\"title\": \"General Response\", \"comment\": [\"We sincerely appreciate all reviewers\\u2019 time and efforts in reviewing our paper. We are glad to find that reviewers recognized our contributions:\", \"**Motivation.** Very good motivation and quite interesting work [p8jh]; Easy to follow [p8jh]; Clear presentation [nDDD]\", \"**Method.** Simple and effective [ZpLc]; Remarkable performance [ZpLc]\", \"**Experiments and analysis.** Good, fair, reasonable, and sufficient experiments [p8jh, KCHJ, nDDD]; Inspiring conclusions [KCHJ]; Detailed analysis [nDDD]\", \"And we also thank all reviewers for their insightful and constructive suggestions, which help a lot in further improving our paper.\", \"In addition to the pointwise responses below, we summarize some supporting experiments and analysis added in the rebuttal according to reviewers\\u2019 suggestions.\", \"### New experiments and analysis:\", \"Adding more experiments on other datasets including Tiny ImageNet, AFHQ, and CelebA-HQ, and on another model SiT [1] to show the effectiveness of our method\", \"Providing more analysis, such as of the U-shaped trend for Gaussian initialization in Table 2\", \"Using Gaussian initializations for other module parts of DiT\", \"Longer training steps for DiT-XL/2, e.g., 5.4M, to achieve 2.27, faster than adaLN-Zero, which uses 7M steps, the longest in the DiT paper\", \"The additional experiments and modifications to the language have been delivered in our paper and reflected in the revised version. We hope our pointwise responses below could clarify all reviewers\\u2019 confusion and alleviate all concerns. We thank all reviewers\\u2019 time again.\", \"[1] Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. ECCV2024\"]}",
"{\"title\": \"New Response to Reviewer nDDD\", \"comment\": \"We sincerely thank the reviewer for the comment and are very delighted to engage in this discussion!\\n\\nWe understand the reviewer\\u2019s concerns and still continue our training and evaluation, which may require some time. We will report our results promptly during the discussion period as requested by the reviewer.\"}",
"{\"summary\": \"This paper investigates AdaLN-Zero within the DiT architecture and proposes three potential reasons for the performance difference between AdaLN-Zero and AdaLN: the incorporation of an SE-like structure, the use of an effective zero-initialized value, and a gradual weight update order. The authors conduct a series of analyses, ultimately concluding that the zero initialization is the most significant factor. Building on these findings, they introduce a method that leverages Gaussian distributions to initialize each condition modulation, referred to as adaLN-Gaussian. Extensive experiments demonstrate an improvement of 2.16% in FID on ImageNet1K.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The presentation is clear, three potential reasons for the performance difference between AdaLN-Zero and AdaLN are studied in detail, and the designed experiment is reasonable.\", \"weaknesses\": \"1. The contribution of this paper is limited; the only significant advancement is the introduction of adaLN-Gaussian, which leverages Gaussian distributions to initialize each condition modulation in the DiT architecture. The performance gains are also limited.\\n2. The experiments conducted are insufficient. Training for 800k iterations may not be adequate for the convergence of the DiT model. Given the 2% improvement in the current results at 400k iterations, it raises doubts about whether Gaussian initialization will outperform zero initialization in terms of final performance. Furthermore, to demonstrate the broader applicability of this method, I recommend conducting additional experiments on transformer-based models such as SiT[1] and PixArt-alpha[2] (text-to-image).\\n\\n[1] Ma, Nanye, et al. \\\"Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers.\\\" arXiv preprint arXiv:2401.08740 (2024).\\n[2] Chen, Junsong, et al. 
\\\"Pixart-$\\\\alpha $: Fast training of diffusion transformer for photorealistic text-to-image synthesis.\\\" arXiv preprint arXiv:2310.00426 (2023).\", \"questions\": \"1. How do you summarize the differences between AdaLN-Zero and adaLN as the three points?\\n2. In Fig. 4, it appears that regardless of the type of initialization used, the weight distribution converges to a similar state after a certain number of training iterations. This raises the question of why the choice of initialization has such a significant impact on performance.\\n3. A few minor suggestions: 1) Simplify Figure 6 by merging the subgraphs to enhance information density. 2) Rotate Figure 2 to improve the visibility of the text.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer KCHJ (1/2)\", \"comment\": \"**Q1:** From line 129~130, we know that all alpha are initialized to 0. Can you explain line 209: why are gamma1, beta1, gamma2, beta2 also zero?\\n\\n**A1:** Why are gamma1, beta1, gamma2, beta2 also zero? The answer is that in adaLN and adaLN-Zero, as shown in Fig 2 of our paper, the weights of the linear layer that produces gamma1, beta1, gamma2, and beta2 are initialized to zero at the beginning. Sorry for the confusion. We have revised the corresponding part of the paper to make it clearer.\\n\\n**Q2:** Some of the inferences are intuitive, and it would be better if there were more rigorous analysis. e.g. line 448: Moreover, this moment should be neither too late, ..., nor too early, as there may be minimal difference from zero-initialization.\\n\\n**A2:** We thank the reviewer for the constructive suggestions. To make the inference more rigorous, we analyze two representative std settings including $std=0.05$ and $std=0.0005$, which correspond to a late moment (a large std) and an early moment (a small std), respectively.\\n\\nWe first illustrate their weight distributions of $W_{\\\\alpha}$ in the conditioning mechanism and compare them with those of adaLN-Zero and adaLN-Gaussian ($std=0.001$). The results are shown in **Fig. 21** of our revised paper in Appendix A.9. It can be seen that a large std $std=0.05$ presents a relatively less compact distribution and exhibits a significant discrepancy in distribution shape compared to the other settings. This result indicates that a large std may be incompatible with other parameters, resulting in slow convergence and poor performance. Moreover, we take this a step further. 
Theoretically, if we further increase the std value, it would become close to the default initialization in adaLN-Step1 (xavier_uniform), while the performance of adaLN-Step1 is also poor.\\n\\nFor a small std $std=0.0005$, it can be seen that the distribution of $W_{\\\\alpha}$ is quite similar to that of adaLN-Zero and adaLN-Gaussian ($std=0.001$). However, there still exists a slight discrepancy. To make this discrepancy clearer, we average the absolute values of the differences between each element in $W_{\\\\alpha}$ corresponding to $std=0.0005$ and adaLN-Zero, and $std=0.0005$ and adaLN-Gaussian. The element-wise averaged results are 0.0121 and 0.0124, respectively. By comparing the results (0.0121 < 0.0124), it is shown that a small std leads to weights relatively closer to those of zero-initialization (adaLN-Zero). And, to some extent, the corresponding performance also supports this, where $std = 0.0005$ produces an FID of 80.68, closer to adaLN-Zero (78.99) compared to adaLN-Gaussian (76.21).\\n\\nWe thank the reviewer again and have revised the corresponding part in our paper (as highlighted in blue), which makes our paper clearer and more reasonable.\\n\\n**Q3:** If there is no limit on steps, will the results of adaLN-Zero catch up with adaLN-Gaussian when training for more steps? Do you have any relevant experimental results? Maybe different initialization methods mainly affect the convergence speed.\\n\\n**A3:** Yes, if there is no limit on steps, the performance of adaLN-Zero could catch up with that of adaLN-Gaussian when training for more steps. For example, adaLN-Zero training for 800K produces 14.73 FID, catching up with adaLN-Gaussian training for 600K (14.84 FID).\\n\\nAnd we agree with the reviewer's comment that different initialization methods mainly affect the convergence speed. Essentially, adaLN-Gaussian only changes the initialization strategy and does not alter any structure of the DiT model, which means the model's capacity is the same. 
Therefore, theoretically, adaLN-Gaussian will not significantly influence the final converged performance but will converge faster. For example, adaLN-Gaussian achieves an FID of 14.84 after 600K training steps, while adaLN-Zero requires 800K steps to reach a similar FID of 14.73. When further extending the training time, adaLN-Gaussian achieves an FID of 10.83 after 1.5M training steps while adaLN-Zero requires around 2.4M steps to reach a similar FID of 10.67.\"}",
"{\"title\": \"A Kind Reminder to Reviewer KCHJ about our rebuttal\", \"comment\": \"Dear Reviewer KCHJ,\\n\\nWe hope that this letter finds you well.\\n\\nAs the end of the discussion period approaches, we would greatly appreciate it if you could let us know whether our responses and revisions align with your expectations and have addressed your concerns. We would like to highlight our new experiment where adaLN-Gaussian achieves 2.27 FID in 5400K training steps, fewer than the 7000K required by adaLN-Zero, which is the longest training run in the DiT paper. This result demonstrates that our method adaLN-Gaussian is also superior to adaLN-Zero in scaled training for final convergence.\\n\\nIf there are any remaining issues where further clarification is needed, please don\\u2019t hesitate to let us know. We are more than willing to provide additional explanations.\\n\\nThank you once again for your time and effort. We look forward to hearing your thoughts and would greatly appreciate it if you could consider kindly raising your rating.\\n\\nSincerely,\\n\\nAuthors\"}",
"{\"title\": \"New Response to Reviewer ZpLc\", \"comment\": \"Dear Reviewer ZpLc:\\n\\nThank you for your comment; we are delighted to engage in this discussion! We apologize for the absence of a theoretical analysis. Our experiments are still ongoing and require additional time. We will share the results and provide further evidence promptly within the discussion period.\"}",
"{\"title\": \"Response to Reviewer ZpLc\", \"comment\": \"**Q1:** Could the authors provide a more detailed explanation of why Gaussian distributions are used to initialize weights?\\n\\n**A1:** Yes, we apologize for the confusion and are very willing to explain the reason for using Gaussian distributions. The logic is as follows:\\n\\nFirst of all, by comparing the differences between adaLN-Zero and adaLN, our analysis studies three elements that collectively drive the performance enhancement: 1) an SE-like structure, 2) a zero-initialized value, and 3) a \\u201cgradual\\u201d update order. Though previous work suggests the manner of shortcuts plays a major role, our analysis finds that it is a good zero-initialized location itself that plays a significant role, which indicates that it is important to find a suitable initialization.\\n\\nFurther, we find that, from the perspective of weight distribution, though weights in the conditioning mechanism are zero-initialized, after a certain number of training steps, they transition from zero distributions to Gaussian-like distributions. Hence, inspired by this, our insight is that we can expedite this distribution shift by directly initializing the weights via a suitable Gaussian distribution to potentially accelerate training.\\n\\nWe have revised the corresponding part of the paper (as highlighted in blue) to make it clearer.\\n\\n**Q2:** More extensive and general experiments\\n\\n**A2:** We thank the reviewer for the kind and constructive suggestions. To further show the effectiveness of adaLN-Gaussian, we add more experiments on other datasets including Tiny ImageNet, AFHQ, and CelebA-HQ using the best-performing DiT-XL/2 with 50K training steps while keeping all training settings unchanged. Moreover, we also use another DiT-based model, SiT-XL/2 [1], trained on ImageNet1K 256x256 for 50K steps to further show the effectiveness and generalization of adaLN-Gaussian. We report all the results below. 
These results show that adaLN-Gaussian consistently outperforms adaLN-Zero, demonstrating the effectiveness of our method.\\n\\n| Dataset | Tiny ImageNet | AFHQ | CelebA-HQ | ImageNet1K (SiT-XL/2) |\\n|----------|----------|----------|----------|----------|\\n| adaLN-Zero | 37.11 | 13.52 | 8.01 | 71.90 |\\n| adaLN-Gaussian | 36.07 | 12.58 | 7.54 | 67.15 |\\n\\nWe hope our response clarifies the reviewer's confusion and alleviates the concern. And we sincerely hope to obtain support from the reviewer.\\n\\n[1] SiT: Exploring flow and diffusion-based generative models with scalable interpolant transformers. ECCV 2024\"}",
"{\"summary\": \"This paper investigates adaLN-Zero, a key component of DiT. They find three key factors that contribute to the superior performance of adaLN-Zero: an SE-like structure, a good zero-initialized value, and a gradual weight update order. The second one plays the most important role. Finally, they propose adaLN-Gaussian, which achieves better results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is easy to follow, and the experiments are relatively sufficient.\\n2. This paper provides some inspiring conclusions.\", \"weaknesses\": \"1. Part of the description needs further explanation, e.g. line 208: They also zero out weights of all modulations including W\\u03b31, W\\u03b21, W\\u03b32, and W\\u03b22 in a block, rendering \\u03b31, \\u03b21, \\u03b32, and \\u03b22 zero.\\n2. Some of the inferences are intuitive, and it would be better if there were a more rigorous analysis, e.g. line 448: Moreover, this moment should be neither too late, ..., nor too early, as there may be minimal difference from zero-initialization.\", \"questions\": \"1. From lines 129-130, we know that all alpha are initialized to 0. Can you explain line 209: why are gamma1, beta1, gamma2, and beta2 also zero?\\n2. In Table 4, if there is no limit on steps, will the results of adaLN-Zero catch up with adaLN-Gaussian when training for more steps? Do you have any relevant experimental results? Maybe different initialization methods mainly affect the convergence speed.\\n3. In addition to adaLN-Zero, have you analyzed the distribution of parameters of other parts of DiT? Do they also end up being Gaussian? How were these parameters initialized?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"New Response to Reviewer p8jh about the scaled training\", \"comment\": \"Dear Reviewer p8jh,\\n\\nWe thank you for your reply and apologize for the confusion you mentioned. To better help you evaluate our method, we further extend the training steps to illustrate the efficacy of our method in scaled training.\\n\\nOur experiments show that adaLN-Gaussian achieves an FID of 10.09 in 2300K training steps, outperforming adaLN-Zero in 2352K training steps (10.67 FID in its paper). Additionally, we find that using the same cfg=1.5 as adaLN-Zero, adaLN-Gaussian achieves 2.27 FID in 5400K training steps, fewer than the 7000K required by adaLN-Zero, which is the longest training run in the DiT paper. These results show the superiority of adaLN-Gaussian over adaLN-Zero in scaled training.\\n\\nWe hope our additional results will address your concerns and help you evaluate our method. And we also sincerely hope to obtain support from you during the reviewer/AC discussion.\\n\\nAuthors\"}",
"{\"title\": \"A concise summary to assist the AC in evaluating our work\", \"comment\": \"## The contributions of this work\\n\\nThis work starts from an important but often overlooked module in DiT called adaLN-Zero. Initially introduced in the DiT paper, adaLN-Zero demonstrated superior performance compared to its counterpart, adaLN, which piqued our interest and prompted us to investigate further. Through step-by-step analysis, we provide some interesting conclusions and propose a simple yet effective refinement with one line of code replaced, showing a promising pathway for future generative models. Our contributions can be summarized as follows:\\n\\n- We study three key factors that collectively contribute to the superior performance of adaLN-Zero: an SE-like structure, a good zero-initialized value, and a gradual weight update order. Among them, we find that a good zero-initialized value plays the most pivotal role.\\n- Based on the distribution variation of condition modulation weights, we heuristically leverage Gaussian distributions to initialize each condition modulation, termed **adaLN-Gaussian**.\\n- We conduct comprehensive experiments across various settings to demonstrate adaLN-Gaussian\\u2019s effectiveness and generalization including 1) different datasets (ImageNet1K, Tiny ImageNet, AFHQ, and CelebA-HQ), 2) DiT variants (DiT-B, DiT-L, and DiT-XL), and 3) DiT-based models (VisionLlama, U-DiT, and SiT). We also involve 4) different training durations. Particularly, our method takes 5400K training steps to achieve 2.27 FID, faster than adaLN-Zero, which uses 7000K steps, the longest run in the DiT paper.\\n\\n## The concerns from reviewers in discussion and how we alleviate them\\n\\nDuring the past discussions, the reviewers raised several concerns regarding our paper and our responses. 
We summarize them and show how we alleviate them as follows:\\n\\n### Concerns from Reviewer ZpLc\\n\\n**Concern 1: Lack of Mathematical Analysis**\\n\\n* The reviewer pointed out that the mathematical analysis they requested was missing.\\n* **How we alleviate it**: We provide the mathematical analysis from two aspects. 1) Inspired by a previous study [1], we investigate the variance of the response of the scale element $\\\\alpha$ in a condition modulation under different initialization strategies including zero-initialization (adaLN-Zero), Gaussian initialization (adaLN-Gaussian), and the default Xavier uniform initialization (adaLN-Mix), analyzing the relationships between the variance and the performance of different initialization strategies. 2) We use the nearest neighbor nonparametric estimation method to demonstrate that adaLN-Zero indeed forms a Gaussian-like distribution.\\n \\n [1] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks.\\n\\n**Concern 2: Unclear Necessity of Gaussian Distribution for Weight Initialization**\\n\\n* The reviewer was unclear about the necessity of using a Gaussian distribution for weight initialization.\\n* **How we alleviate it**: We explain two reasons for the necessity of using a Gaussian distribution for weight initialization. First, the significant improvement of adaLN-Zero over adaLN shows the importance of the conditioning mechanism for superior performance. Thus, it is necessary to select a more suitable initialization. Second, using Gaussian is motivated by our statistical results: we find that weights in the conditioning mechanism, though zero-initialized, always transition to a Gaussian-like distribution. 
Thus, using a suitable Gaussian initialization could better expedite this distribution shift.\\n\\n**Concern 3: Insufficient Extended Experiments**\\n\\n* The reviewer noted that the extended experiments with only 50K training steps were insufficient to demonstrate the effectiveness of adaLN-Gaussian over longer training periods.\\n* **How we alleviate it**: We present the results of extended training on different datasets to prove the long-term effectiveness of adaLN-Gaussian, e.g., an additional 50K for Tiny ImageNet, 150K for ImageNet1K (SiT-XL/2). We do not extend more training steps for Tiny ImageNet, AFHQ, and CelebA-HQ as our experiments show this leads to overfitting, i.e., an increase in FID. We also show that our method takes 5400K steps to achieve 2.27 FID, faster than adaLN-Zero, which uses 7000K steps, the longest run in the DiT paper.\\n\\n### Concerns from Reviewer p8jh and nDDD (regarding scaled training)\\n\\nThey are both concerned about the scaled training of our method for DiT, namely whether our method could outperform adaLN-Zero after a long training time and could be faster to achieve the final converged performance (2.27 FID in DiT).\\n\\n* **How we alleviate it**: We report the FID results below, which show the superiority of adaLN-Gaussian in scaled training: it achieves 2.27 FID faster than adaLN-Zero, which uses 7000K steps, the longest run in the DiT paper.\\n\\n| Dataset | ImageNet (cfg=1) | ImageNet (cfg=1.5) |\\n|----------|----------|----------|\\n| adaLN-Zero | 10.67 (2352K) | 2.27 (7000K) |\\n| adaLN-Gaussian | 10.09 (2300K) | 2.27 (5400K) |\"}",
"{\"title\": \"Response to Reviewer p8jh\", \"comment\": \"**Q1:** How can you find the similarity between adaLN and the SE architecture? This is an interesting point. We hope that the authors can provide some principles rather than intuitions.\\n\\n**A1:** We thank the reviewer for the comment and are very willing to explain how we found this similarity. We primarily find the connections between them from three aspects: 1) overall structure; 2) module function; and 3) detailed mathematical formula.\\n\\nFirst, from the view of the overall structure, the adaLN-Zero and SE modules both serve as a side pathway compared to the main path.\\n\\nThen, from the view of the module function, the scaling element $\\\\alpha$ in adaLN-Zero plays a similar role to the SE module, both of which aim to perform a channel-wise modulation operation. Hence, to achieve it, they may yield outputs of the same structure, e.g., a vector, thereby sharing some similarity in output formulation.\\n\\nFinally, from the view of the mathematical formula, though we may not directly find the similarity due to the existence of the SiLU function, a more detailed expansion allows us to find a close connection in the mathematical formula between the adaLN-Zero and SE modules. With a slight adjustment according to experience, the similarity appears.\\n\\nWe thank the reviewer for the question and have added the response as well as a figure of the structure of the SE module to our paper (as highlighted in blue), which makes the starting point clearer.\\n\\n**Q2:** As shown in Figure 2, the performance of Gaussian initialization shows a U-shaped trend. Could you please provide some analysis of this trend? Why does a large std bring relatively bad results?\\n\\n**A2:** The reviewer may mean Table 2 instead of Figure 2. 
Intuitively, since the weights of the conditioning mechanisms we measured follow Gaussian-like distributions, there should exist an optimal std hyperparameter when initializing these weights with a Gaussian, and naturally, the values on both sides of this hyperparameter are relatively unsuitable.\\n\\nWe follow the reviewer's advice and analyze this U-shaped trend by leveraging two representative settings, i.e., $std=0.0005$ and $std=0.05$, which are the two ends of the U-shaped results.\\n\\nWe first illustrate their weight distributions of $W_{\\\\alpha}$ in the conditioning mechanism and compare them with those of adaLN-Zero and adaLN-Gaussian ($std=0.001$). The results are shown in **Fig. 21** of our revised paper in Appendix 9. We find that a large std $std=0.05$ presents a relatively less compact distribution and exhibits a significant discrepancy in distribution shape compared to the remaining settings. This result indicates that a large std may increase the difficulty of optimization, resulting in slow convergence. Moreover, let us take this a step further. Theoretically, if we further increase the std value, it would become close to the default initialization in adaLN-Step1 (xavier_uniform), while the performance of adaLN-Step1 is also bad.\\n\\nFor a small std $std=0.0005$, we find that the distribution of $W_{\\\\alpha}$ is more compact and quite similar to that of adaLN-Zero and adaLN-Gaussian ($std=0.001$). Though similar, there still exists a slight discrepancy. To make this discrepancy clearer, we average the absolute values of the differences between each element in $W_{\\\\alpha}$ corresponding to $std=0.0005$ and adaLN-Gaussian, and adaLN-Zero and adaLN-Gaussian. The results are 0.0124 and 0.0122, respectively, which indicates that there is still an element-level value shift in the weight matrices. 
The reason that the performance of $std=0.0005$ is relatively bad in Table 2 may be that $W_{\\\\alpha}$ in $std=0.0005$ is farther away from that in adaLN-Gaussian compared to adaLN-Zero, since 0.0124 > 0.0122. Note that though the difference is small, considering the large number of elements (about 74.3M), the influence cannot be ignored.\\n\\nWe hope our response clarifies the reviewer's confusion. And we sincerely hope to obtain support from the reviewer.\"}",
"{\"title\": \"Response to Reviewer nDDD about adaLN-Gaussian Convergence Concern\", \"comment\": \"Dear Reviewer nDDD,\\n\\nWe sincerely appreciate your patience and apologize for the delay in responding to your question. Over the past few days, we kept the other settings unchanged and continued training our adaLN-Gaussian-initialized DiT.\\n\\nWe find that adaLN-Gaussian achieves 10.09 FID in 2300K training steps, outperforming adaLN-Zero in 2352K training steps (10.67 FID in its paper).\\n\\nMore importantly, as the reviewer requested, using the same cfg=1.5 as adaLN-Zero, we find that adaLN-Gaussian achieves 2.27 FID in 5400K training steps, fewer than the 7000K required by adaLN-Zero, saving around 23% of the training time with just one line of code replaced.\\n\\nWe hope our response is not too late to address your concerns and could help you reassess the contributions of our work. Also, we promise to add these results to our final version and would greatly appreciate it if you could consider kindly raising your rating! Thank you once again for your patience.\\n\\nSincerely,\\n\\nAuthors\"}",
"{\"title\": \"Responses to Reviewer ZpLc about theoretical analysis and more experiments (1/2)\", \"comment\": \"Dear Reviewer ZpLc,\\n\\nWe sincerely appreciate your patience and apologize for the delay in responding to your question.\\n\\nInspired by a previous study [1], we investigate the variance of the response of a scale element $\\\\alpha$ in a condition modulation under different initialization strategies including zero-initialization (adaLN-Zero), Gaussian initialization (adaLN-Gaussian), and the default Xavier uniform initialization (adaLN-Mix).\\n\\nFor simplicity, we omit the bias term. Thus, for a scale element $\\\\alpha$ as shown in Eq. 1 in our paper, it is formulated as:\\n\\n$\\\\alpha = \\\\operatorname{SiLU}(c) * W_{\\\\alpha} = (c \\\\cdot \\\\operatorname{Sigmoid}(c)) * W_{\\\\alpha} $\\n\\n$\\u2217$ denotes matrix multiplication and $\\\\cdot$ the Hadamard product.\\n\\nWe let the initialized elements in $W_{\\\\alpha}$ be mutually independent and share the same distribution. Similar to [1], we assume that the elements in $c$ are also mutually independent and share the same distribution, and $c$ and $W_{\\\\alpha}$ are independent of each other. Then we have\\n\\n$\\\\operatorname{Var}\\\\left[\\\\alpha_{i}\\\\right] = n\\\\operatorname{Var}\\\\left[(c_{i} \\\\cdot \\\\operatorname{Sigmoid}(c_{i})) * w_{\\\\alpha} \\\\right] $\\n\\nwhere $n$ is a scalar that represents the number of neural connections, and $\\\\alpha_{i}$, $c_{i}$, and $w_{\\\\alpha}$ represent the random variables of each element in $\\\\alpha$, $c$, and $W_{\\\\alpha}$, respectively. We know that all initialization strategies allow $w_{\\\\alpha}$ to have zero mean. 
Then the variance of the product of independent variables gives us:\\n\\n$\\\\operatorname{Var}\\\\left[\\\\alpha_{i}\\\\right] = n \\\\operatorname{Var}\\\\left[w_{\\\\alpha} \\\\right] \\\\operatorname{E}\\\\left[(c_{i} \\\\cdot \\\\operatorname{Sigmoid}(c_{i}))^{2} \\\\right] $\\n\\nTherefore, we can see that the variance of an element in $\\\\alpha$ depends on $\\\\operatorname{E}\\\\left[(c_{i} \\\\cdot \\\\operatorname{Sigmoid}(c_{i}))^{2} \\\\right]$ and $n\\\\operatorname{Var}\\\\left[w_{\\\\alpha} \\\\right]$, while $n\\\\operatorname{Var}\\\\left[w_{\\\\alpha} \\\\right]$ is highly related to our method.\\n\\nIn the adaLN-Zero case, $n\\\\operatorname{Var}\\\\left[w_{\\\\alpha} \\\\right]$ is 0. In the adaLN-Gaussian case, $n\\\\operatorname{Var}\\\\left[w_{\\\\alpha} \\\\right]$ is around 0.012. In the adaLN-Mix case, $n\\\\operatorname{Var}\\\\left[w_{\\\\alpha} \\\\right]$ is 1.04. We know that at the early stage of training, due to random initialization, $c$ would vary a lot, introducing a large variance. To some extent, this conditioning input may be unreliable and disturb the model optimization. AdaLN-Zero and adaLN-Gaussian suppress this variance, allowing the model to learn more stably, while adaLN-Mix fails to do so. As a result, adaLN-Zero and adaLN-Gaussian learn faster than adaLN-Mix.\\n\\n[1] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249\\u2013256, 2010.\"}"
]
} |
E4kuNZWost | TULiP: Test-time Uncertainty Estimation via Linearization and Weight Perturbation | [
"Yuhui Zhang",
"Dongshen Wu",
"Yuichiro Wada",
"Takafumi Kanamori"
] | A reliable uncertainty estimation method is the foundation of many modern out-of-distribution (OOD) detectors, which are critical for safe deployments of deep learning models in the open world. In this work, we propose TULiP, a novel, theoretically-driven, post-hoc uncertainty estimator for OOD detection. Our method considers a hypothetical perturbation applied to the network prior to convergence. Based on linearized training dynamics, we bound the effect of such perturbation, resulting in an uncertainty score computable by perturbing model parameters. Ultimately, our approach computes uncertainty from a set of sampled predictions, thus not limited to classification problems. We visualize our bound on synthetic regression and classification datasets. Furthermore, we demonstrate the effectiveness of TULiP using large-scale OOD detection benchmarks for image classification. Our method exhibits state-of-the-art performance, particularly for near-distribution samples. | [
"Out-of-distribution detection",
"Uncertainty Quantification",
"Lazy Training",
"Neural Tangent Kernel"
] | Reject | https://openreview.net/pdf?id=E4kuNZWost | https://openreview.net/forum?id=E4kuNZWost | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"rJvhbhYx1q",
"ppKXwzLzzV",
"pkMObr1n02",
"p0n1TZxdYN",
"ofRR94PYqX",
"oEXsg0XiVg",
"mDor2ZYxrt",
"kTzEzm1glN",
"gl7s0IFYFQ",
"eqWjv0pdgb",
"dy81bHTKSS",
"a5jeKNUbSQ",
"Yv9XjzB22z",
"No8AUCIV0c",
"JduNaps1IL",
"FQeKAsPW9D",
"Dc9RtQE7So",
"CBiFbDjqws",
"AB8MDUXRXu",
"96BxMocbQH",
"4a2z7sbIBN",
"3nuwipQsl6",
"3nH9s2zb3M",
"06NHVIjF50"
],
"note_type": [
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1730541421913,
1737524143947,
1732257931835,
1732686870532,
1732706781116,
1732256912031,
1732257152223,
1732554601449,
1732706933890,
1732257442448,
1733107447690,
1732790537675,
1732707089872,
1732257819835,
1732257498500,
1734437809003,
1730604189514,
1732257097998,
1732711861402,
1732710224917,
1732623195316,
1730125179517,
1732707316064,
1733213288033
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_3GdG"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_mnne"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_enPE"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_mnne"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Area_Chair_gQW9"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_mnne"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_enPE"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_enPE"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_3GdG"
],
[
"ICLR.cc/2025/Conference/Submission11751/Reviewer_enPE"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11751/Authors"
]
],
"structured_content_str": [
"{\"summary\": \"This paper proposes a novel uncertainty estimation method, TULiP, for OOD detection. The core idea of the paper is to generate uncertainty scores by perturbing model parameters based on linearized training dynamics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is clearly written and experiments have been extensively conducted across a set of diverse datasets.\", \"Theoretical analysis is thorough.\"], \"weaknesses\": [\"There are so many hyperparameters that implementing the method in realistic scenarios may be difficult.\", \"Performance on far OOD is not good enough.\"], \"questions\": \"How does the performance compare to deep ensembles [1]?\\n\\n[1] Simple and scalable predictive uncertainty estimation using deep ensembles\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"**W8: Alternative architectures such as BiT and ViT.**\\n\\nWe recognize the great success of transformer-based models in recent research. However, they offer a significantly different architecture compared to convolutional networks, with the introduction of self-attention layers, etc. Applying and implementing TULiP to such architectures would require great effort, and we are still exploring TULiP over a broader spectrum of network architectures in our future research. We will revise our manuscript to emphasize this limitation.\\n\\n**W9: TULiP and Covariate-shift OOD**\\n\\nThank you for pointing that out. In Table 2, the reported metric is AUROC.\\nIt might be true that some compared methods are designed specifically for SS-OOD. However, most of the methods are agnostic to SS-OOD and CS-OOD, e.g., MC-Dropout and MDS, which explicitly included experiments in a Covariate-Shift setting in their original work.\\n\\nTULiP aims to detect covariate-shift OOD samples. To further validate TULiP on CS-OOD, *we have included a new experiment in the revised manuscript Appendix C.4*. We compared TULiP with other UQ methods that are not specific to SS-OOD, using a well-established ImageNet-C Gaussian Blur experiment. The results further suggest TULiP's effectiveness in the CS setting.\\n\\nMoreover, thank you for mentioning the study [3]. While we saw some relations between our research and [3], let us remark on some differences as follows: the authors of [3] discussed a Semantically Coherent OOD setup, where data with ID labels in the OOD dataset is considered ID. Their proposed method, UDG, involves unsupervised learning with unlabeled OOD samples, which is, strictly speaking, different from our problem setting, making it difficult to fairly compare [3] and TULiP. We will mention [3] in the related work section of our revised manuscript.\"}",
"{\"title\": \"Further comments\", \"comment\": \"Thank you very much for the detailed responses. The overall idea is interesting. After carefully reading the revised paper again, there are a few questions:\\n1. In line 33, it says \\\"Under our problem setting, neither the distribution of initialized models nor the training process is accessible, .......... perturbation applied towards the network function f(x), at a time t = ts before the training terminates at t = T.\\\" \\n\\nThese statements seem to contradict each other, as it requires the model before convergence, which means the training process is accessible.\\n\\nBesides, if you can access the training process, why can the distribution of initialized models not be accessed? Moreover, the initialization should follow known random distributions.\\n\\n2. If the (partial) training process can be accessed (at time t = ts), it would not be hard to know \\\\hat{f}_{T}(z), and then the bound of the difference ||f_{T}(z) - \\\\hat{f}_{T}(z)|| should be known. Then, we might not need such sophisticated methods for bounding this term.\\n\\n3. A bit more explanation of why Eq. 6 holds would help. Though some people can see this can be achieved by expanding the left term, a bit more explanation can help more people understand this equation.\\n\\n4. Following the above questions, can the authors list some practical scenarios in which the settings in the paper would occur? \\n\\nOverall, the math in this paper is clean and neat, which should be appreciated. However, the reviewer is unsure about the motivation for this work. Maybe this work can be reformulated as a method for uncertainty estimation, like what the dropout-as-Bayesian-approximation paper did. Please correct me if I misunderstood anything. Thanks again for the responses.\"}",
"{\"comment\": \"We appreciate your response and your recognition of our theoretical contributions. Thank you again for your great effort in reviewing our work.\\n\\nWe have uploaded a revision of our manuscript in response to your thoughtful concerns and invaluable comments.\", \"regards_to_your_last_reply\": \"- It is correct that, as you have said, in our setting neither the distribution of initialized models nor the training process is accessible. **We have no access to the training process and therefore we did not perturb the network prior to convergence in practice** (lines 152-156). In fact, the functional perturbation prior to convergence is only for establishing the theoretical framework, ultimately leading us to our bound (Thm. 3.1, Prop. 3.3) and a practical method (TULiP) to evaluate this bound. In practice, since we don't know $t_s$ or $\\\\theta_{t_s}$, to estimate the bound we have assumed $t_s = 0$ and $\\\\theta_{t_s} \\\\approx \\\\mathbb{E} \\\\theta_{t_s} = \\\\mathbf{0}$. In our revised manuscript, we have emphasized this in lines 137-139.\\n- For Eq. 6, thank you for your comments. We have revised our paper with a remark in line 194 and lines 989-992.\\n- For practical scenarios: As we address your concerns above, it might be clear that our work is potentially broadly applicable in practice, for example, in autonomous driving and medical applications (line 29). TULiP works in a post-hoc setting, where only the trained parameters are required to apply our method, without access to the training data or training process. Post-hoc methods are widely used in practice (lines 42-47).\\n\\nWe hope that the above explanations will provide you with a clearer intuition of our work and motivation.\\n\\n\\nBelow, we briefly summarize the revisions related to your previous concerns as follows:\\n\\n1. Added Appendix A.7: Discussion about the connection between Eq. 5 and Eq. 1 (W1).\\n2. 
Added Appendix C.6: Discussion regarding computational efficiency (W3), where we have included a wall-clock time comparison between TULiP and single forward-pass methods (EBO).\\n3. Revised line 417: Clarification of ID/OOD setups (W4).\\n4. Revised lines 264, 535: Highlighting the limitation of the current layer-wise scaling scheme (W7).\\n5. Appendix C.2 (Added **Figure 5**): We added additional empirical evidence justifying our layer-wise scaling scheme, where the scaled NTK and in-training **empirical NTK are directly computed and compared** using training data (W7). Please refer to Figure 5 (line 1361) for details.\\n\\nWe hope that the above explanations will help address your concerns, and we look forward to your response. We sincerely appreciate your constructive comments and your time in reviewing our paper.\"}",
"{\"comment\": \"We appreciate your review of our manuscript and valuable feedback, especially on the theoretical aspects. We will provide additional clarifications and address your concerns here.\\n\\nWe uploaded a revision of our manuscript. In the following, unless otherwise mentioned, we refer to sections and figures/tables in our original manuscript.\\n\\n**W1: What's the motivation for calculating the upper bound of variations for uncertainty quantification?**\\n\\nFirstly, in our post-hoc setting, we have no access to the training data, the training process, or model parameters prior to convergence; we only have a trained, converged model. In such a case, direct computation of Eq. (1) is impossible. Furthermore, the perturbation in Theorem 3.1 is hypothetical (line 154). As we explained around lines 150-154, perturbation to model parameters prior to convergence is also intractable in a post-hoc setting since it requires re-training. Therefore, a bound like Eq. (5) is derived to overcome this difficulty. More importantly, our method addresses the uncertainty caused by the training process (lines 133-135) as it is critical (lines 43-50) to uncertainty estimation.\\n\\nDeep Ensembles [1] computes Eq. (1) in its exact form. In *Appendix C.4 of the revised manuscript*, we provide additional experiments comparing our method and Deep Ensembles, alongside other methods.\\n\\nWe also note that our perturbation scheme, hence Theorem 3.1, links back to Eq. (1) in the following sense:\\n\\nIn our theoretical framework, we consider the infinite-width limit under the NTK scaling [2]. Under this limit, the empirical NTK (at initialization) converges to a specific deterministic kernel $\\\\Theta$, where the distribution of a neural network $f(x; \\\\theta)$'s initialization functional $f_\\\\mathrm{Init}(x)$ converges to a Gaussian Process (NNGP). In Eq. (2), it is equivalent to a deterministic (fixed) $\\\\left. 
\\\\nabla_\\\\theta f_\\\\mathrm{True}(x) \\\\right|\\\\_{\\\\theta = \\\\theta^\\\\ast}$ and a stochastic $f_\\\\mathrm{Init}$ following the NNGP.\\n\\nUsing the model defined in Eq. (2) and the training process described in Eq. (3), Eq. (1) effectively becomes:\\n\\n$\\\\mathrm{Var}\\\\_{f_\\\\mathrm{Init} \\\\sim \\\\mu_\\\\mathrm{NNGP}}[f_T(x;\\\\theta | \\\\mathrm{Init} = f_\\\\mathrm{Init})],$\\n\\nwhere $f_T(x;\\\\theta | \\\\mathrm{Init} = f_\\\\mathrm{Init})$ indicates a network trained via Eq. (3) by time $T$, with $f_\\\\mathrm{Init}$ as initialization.\\n\\nWhen we set $t_s = 0$ (the initialization time), the perturbation $\\\\Delta f$ will be applied to $f_\\\\mathrm{Init}$. Therefore, given a fixed initialization $f_0$ to perturb, Eq. (5) computes an upper-bound over a perturbation of the initialization functional:\\n\\n$\\\\mathrm{Var}_{\\\\Delta f}[f_T(x;\\\\theta|\\\\mathrm{Init} = f_0 + \\\\Delta f)],$\\n\\nsince $\\\\hat{f}_T$ is supposed to be trained from initialization $f_0 + \\\\Delta f$, we have $\\\\hat{f}_T = f_T(x;\\\\theta | \\\\mathrm{Init} = f_0 + \\\\Delta f)$, hence the above.\\n\\nComparing it to the previous equation, we see that the difference between them is the distribution of the initialization functional $f_\\\\mathrm{Init}$. In Eq. (1), $f_\\\\mathrm{Init}$ comes from the NNGP; while in Eq. (5), it is centered around $f_0$ with a stochastic perturbation $\\\\Delta f$. Intuitively, by using Eq. (5), we are approximating the predictive variance trained from the NNGP prior with the predictive variance trained from a random perturbed initialization $f_0 + \\\\Delta f$. Figure 2 demonstrates the effectiveness of such an approximation. Notably, TULiP is tractable under our post-hoc setting, whereas Eq. 
(1) is not and typically requires tremendous computational effort.\\n\\nWe will add the above discussion to our revised manuscript.\\n\\n[1] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, NeurIPS'17 \\n[2] Neural tangent kernel: Convergence and generalization in neural networks, NeurIPS'18\"}",
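To make the perturbation-based approximation discussed in the exchange above concrete, here is a minimal hypothetical sketch (not the authors' implementation): perturb converged weights with Gaussian noise and measure the spread of the resulting predictions as a Monte-Carlo stand-in for the predictive variance. The toy linear `predict`, the noise scale `eps`, and `M` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(theta, x):
    # Toy stand-in for a trained network f(x; theta).
    return x @ theta

def perturbed_variance(theta_star, x, eps=0.05, M=10):
    # Monte-Carlo spread of predictions under random Gaussian
    # perturbations of the converged parameters theta_star
    # (illustrative; eps and M are assumed hyper-parameters).
    preds = np.stack([
        predict(theta_star + eps * rng.standard_normal(theta_star.shape), x)
        for _ in range(M)
    ])
    return preds.var(axis=0)

theta_star = rng.standard_normal((4, 3))  # "converged" weights
x = rng.standard_normal((2, 4))           # test inputs
var = perturbed_variance(theta_star, x)   # per-output predictive spread
```

Inputs with larger spread across the perturbed copies would then be treated as more uncertain, which is the intuition behind using Eq. (5) as a tractable surrogate for Eq. (1).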
"{\"comment\": \"**W7: Approximation of the true NTK**\\n\\nThank you for pointing out this important aspect.\\nFirst, in Figure 1 a-c), the color represents the number of parameters in each convolutional layer, with blue corresponding to layers with more parameters and green to those with fewer. Only one network was trained to produce all three parts of Figure 1 a-c).\", \"regarding_the_following_question\": \"> This approximation only considers the impact of parameters in each layer and does not account for the effect of the order of layers with the same parameters in the network.\\n\\nIf this question implies that the approximation in Eq. (11) does not account for the interplay between layers in deep neural networks, we acknowledge your concern. Indeed, our scaling factor $\\\\Gamma$ is based solely on the number of parameters in the layers, potentially ignoring their order and inter-layer dynamics.\\n\\nWhen the neural network has only a single hidden layer, the NTK can be well-approximated using Eq. (11) based on the definition of the NTK. However, as demonstrated in our numerical experiments (Section 5), our approach performs well even without explicitly considering inter-layer relationships, relying only on the number of parameters in the layers. We chose this approach for its simplicity, as investigating inter-layer dynamics would require significant additional effort.\\n\\nAs you have kindly pointed out, further investigation is indeed necessary for deep neural networks. We will revise the manuscript to highlight this limitation more explicitly.\\n\\nIf we have misunderstood your question, please let us know, and we will reply accordingly.\"}",
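The layer-wise scaling discussed above (a factor depending solely on each layer's parameter count, ignoring layer order and inter-layer dynamics) might be sketched as follows. This is a hypothetical illustration: the square-root normalization and the name `layer_scaling` are our assumptions, not the paper's exact $\Gamma$.

```python
import numpy as np

def layer_scaling(param_counts):
    # Hypothetical diagonal scaling factors, one per layer, computed purely
    # from each layer's parameter count; the order of layers and any
    # inter-layer dynamics are deliberately ignored (the stated limitation).
    counts = np.asarray(param_counts, dtype=float)
    return np.sqrt(counts / counts.sum())

# Example: a 3-layer network with 1000, 4000, and 500 parameters.
factors = layer_scaling([1000, 4000, 500])
```

Under this form, layers holding more parameters receive larger factors, matching the coloring convention described for Figure 1 a-c) where parameter-heavy layers are distinguished from lighter ones.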
"{\"comment\": [\"Thanks for the response.\", \"Regarding GEN [2]: I believe there may be some misunderstanding regarding my comments on GEN. At no point did I claim that TULiP is the same as GEN. Instead, I noted that the OOD score utilized in the paper has already been explored in GEN, which is accurate. Additionally, I highlighted that TULiP differentiates itself by relying on perturbed predictions. Since you mentioned that TULiP can be integrated with other logit- or predictive-distribution-based OOD detectors, it would be beneficial to include results demonstrating this integration.\", \"Regarding performance drop on ImageNet-1K: If the checkpoint has a significant impact, reporting the average results would be more appropriate. Furthermore, the paper currently lacks a discussion on why performance degrades when using ImageNet-1K as the in-distribution dataset. Including such a discussion would enhance the paper's insights.\", \"I strongly recommend that the authors carefully develop the related work section to better position their contributions. Additionally, conducting more comprehensive experiments is necessary to support the claims made in the paper.\"]}",
"{\"comment\": \"We would like to thank you again for your great effort in reviewing our work and for your appreciation. We are happy that your concerns have been addressed.\\n\\nWe have uploaded a revision of our manuscript addressing your thoughtful concerns and invaluable comments.\", \"we_briefly_summarize_the_related_revisions_as_follows\": \"1. Revised Appendix B.2: Added discussion on determining hyper-parameters in practice (W1).\\n2. Line 462: We emphasized the results shown in Appendix C.3 (W2).\\n\\nWe sincerely appreciate your time in reviewing our paper. If you have any questions or concerns about our work, we are glad to discuss further.\"}",
"{\"comment\": \"First of all, we thank the reviewer for reviewing the manuscript and acknowledging our contributions. We provide additional clarification and explanations for your concerns.\\n\\nWe uploaded a revision of our manuscript. In the following, unless otherwise mentioned, we refer to sections and figures/tables in our original manuscript.\\n\\n**W1: There are so many hyperparameters that implementing the method in a realistic scenario may have some difficulties.**\\n\\nIndeed, TULiP involves $\\\\epsilon, \\\\lambda, \\\\delta$ as its hyper-parameters. It is natural to consider it difficult to adopt in real use cases. However, we have found that the parameter optimization process for TULiP is notably manageable.\\n\\nIn Appendix C.3 and Table 7, we demonstrate the efficiency of the hyper-parameter tuning of TULiP, using torchvision's ImageNetV2 weights. The same hyper-parameter search range, as shown in Table 4, was used. Despite the increased number of hyper-parameters, TULiP achieved top results (note that ViM requires access to training data) in this challenging scenario, where all methods suffer from a significant performance drop due to the nature of V2 weights (further discussed in Appendix C.3).\\n\\nWhen the validation sets are relatively small, it usually takes ~10 minutes to search for an optimal hyper-parameter set for TULiP. If time is constrained, $\\\\epsilon$ is the most important parameter (lines 501, 1331), and tuning it alone while fixing a reasonable $\\\\lambda, \\\\delta$ (e.g., the values suggested in line 418) usually yields good results (cf. Figure 4). We will add this discussion to our revised manuscript.\\n\\n**W2: Performance on far OOD is not good enough.**\\n\\nAs you have pointed out, TULiP's performance on far-OOD is not as good as its near-OOD results within Table 1. In the table, TULiP is mainly outperformed by either ViM or ASH. 
However, ViM requires access to training data, which differs from our setup and uses more information, while ASH is significantly unstable in some cases; see the last row of Table 7 in Appendix C.3. Based on both Tables 1 and 7, we consider that TULiP is on par with ASH in the far-OOD scenario. We will add a remark on our far-OOD performance in Table 7 in the main body of our revised manuscript.\\n\\nNote that ASH has no theoretical insights and, therefore, lacks explainability and relies more on heuristics for tuning. This could potentially be one of the reasons that ASH fails in the experiment in Appendix C.3.\"}",
"{\"comment\": \"Thank you for your further clarifications. I will raise my score. However, the example you mentioned could be more concrete, e.g., explaining in detail how the method can be used for autonomous driving and medical applications.\"}",
"{\"comment\": \"We appreciate your prompt feedback.\\n\\n- Regarding W8: Thanks for the suggestion. We have revised line 538 to highlight this limitation.\\n- Regarding W9: Thanks for the comment on the study [3]. As you stated, we may not have understood the comment sufficiently. The reason for our confusion could be that demonstrating robustness is not within the scope of our manuscript, while the \\\"latter\\\" was emphasized in your response. Could you explain that comment more explicitly? We appreciate your clarification. \\nAs for Table 9 of the current revised manuscript (Appendix C.5, previously C.4) on CS-OOD experiments, we intended to further demonstrate TULiP with baselines not explicitly designed for SS-OOD. In your comment, you suggested a fairer comparison if the \\\"latter\\\" is the case, yet TULiP falls into the \\\"former\\\" category (i.e., aims to detect CS-OOD). Nevertheless, we still believe that this experiment, as added, can enhance our paper, since some baseline methods in Table 2 only considered SS-OOD in their original study, as you have rightfully pointed out.\\n- ReAct and RankFeat: As you have commented, TULiP is indeed similar to them as an enhancement method that can work with methods in logit and probability spaces. Therefore, we have added them to line 98.\\n- Why we didn\\u2019t use the reproduced results from our machine: During our discussion, we first conducted the experiment using GEN+ViT-B-16, and we obtained the exact same result as listed in the OpenOOD repository [1], since we used the same pretrained weights provided by torchvision. Therefore, we simply cited the other baseline results, as this still takes less time, even though only a single forward pass is required.\\n\\n[1] OpenOOD v1.5 results. https://docs.google.com/spreadsheets/d/1mTFrO-_STYBRcNMMEmHQrFPQzeg6S8Z2vRA8jawTwBw/edit?usp=sharing (Last accessed 2024-11-28 04:04 GMT+0). \\n[3] Semantically Coherent Out-of-Distribution Detection. 
In ICCV, 2023.\\n\\nOnce again, we are deeply grateful for your invaluable time and effort in reviewing our paper, and we would greatly appreciate your feedback.\"}",
"{\"comment\": \"We would like to thank you again for your prompt and constructive feedback, and for the great effort you put into reviewing our work.\\n\\nWe have uploaded a revision of our manuscript addressing your thoughtful concerns and invaluable comments.\", \"regarding_to_your_last_reply\": [\"Regarding GEN [2]: Thank you for your clarification. We have revised Section 2, Table 1 and lines 368-370 **in the revised manuscript** to address the difference more clearly. We have reported results for GEN and TULiP+GEN in the revised Table 1.\", \"Performance drop on ImageNet-1K: In **lines 463-473 of the revised paper**, we mentioned the performance drop of TULiP on ImageNet-1K and provided a brief discussion. As you have pointed out, we agree that such an analysis would surely be insightful. However, an in-depth (empirical) analysis might be essential to justify any claims and detailed explanations of this phenomenon, which points to a valuable future research direction. Nevertheless, as we stated in the revised paper, AUROC-wise, TULiP still outperforms other baselines (except ASH) by a significant margin.\", \"Carefully develop the related work section: We have revised the entire Section 2 to better compare our work to existing works and clarify our contributions. We reduced the amount of text on UQ methods and assigned more space to position our contributions, and included GEN [2] in the section.\", \"Reporting the average results would be more appropriate: Thank you for your kind suggestion. However, we believe that it is important to follow the standard benchmarks established by Zhang et al. (OpenOOD v1.5), where they have only used the V1 weights, in order to enhance the accessibility and soundness of our work. Therefore, we respectfully keep our current setups with V1 weights in our main table.\"]}",
"{\"comment\": \"We are grateful for your thorough and constructive feedback on our manuscript.\\n\\nWe uploaded a revision of our manuscript. In the following, unless otherwise mentioned, we refer to sections and figures/tables in our original manuscript.\\n\\nWe are further revising the paper regarding your legitimate comments, especially on lines 18, 72-73, related works, tables, clarity in Sections 3 and 4, and parts related to early checkpoints.\\n\\n**Part of W5: Regarding GEN [2]**\\n\\nThank you for recommending the study [2] as our related work. We will revise our manuscript by including it in the related work section. Here, let us remark on some differences as follows: To be precise, our work differs from GEN [2], since we focus on an approximation of the posterior samples rather than Shannon Entropy itself, and perturbation plays a central role in our method. In Algorithm 1, Shannon Entropy is employed as it is a common choice to estimate predictive uncertainty from (surrogate) posterior samples. In fact, TULiP can be integrated with other logit- or predictive-distribution-based OOD detectors (EBO, GEN, etc.) by substituting line 18 of Algorithm 1.\\n\\n**W6: Performance drop on ImageNet-1K.**\\n\\nAs you have pointed out, TULiP does not outperform ASH in the ImageNet-1K near/far-OOD setting of Table 1. However, we would like to emphasize that TULiP outperforms ASH by a large margin on both the near- and far-OOD settings with ImageNetV2 weights; see Table 7 of Appendix C.3. Taking both Tables 1 and 7 into account, we consider that our method achieves SOTA performance. Note that we have conducted hyper-parameter tuning for ASH for Table 7, as shown in Table 4, but we were unable to achieve good results. One of the reasons is that we have no insight into how the parameter leads ASH to good performance other than the suggested range in the original paper, since it does not have any theoretical analyses, unlike TULiP. 
In the main body of the revised manuscript, we will remark on our performance in Table 7. As for the reason for the degraded performance of TULiP in Table 1 (and ASH in Table 7), we are currently investigating it as future work.\\n\\n**W7: Rearrangement of Table 1.**\\n\\nThank you for your thoughtful suggestions. We have rearranged Tables 1 and 2 to better indicate methods requiring training data access.\"}",
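As an illustration of the substitution mentioned in the exchange above (swapping the entropy score at the end of Algorithm 1 for another predictive-distribution-based detector), a minimal hypothetical sketch might look like the following; the array shapes, score forms, and `M = 10` are our illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def entropy_score(probs):
    # Shannon entropy of the ensemble-averaged predictive distribution,
    # computed from M perturbed predictions of shape (M, num_classes).
    p = probs.mean(axis=0)
    return float(-(p * np.log(p + 1e-12)).sum())

def energy_score(logits):
    # An EBO-style alternative on the averaged logits; swapping this in
    # for entropy_score illustrates the "substitute line 18" idea.
    z = logits.mean(axis=0)
    return float(-np.log(np.exp(z).sum()))

# Example with M = 10 perturbed predictions over 5 classes.
rng = np.random.default_rng(0)
logits = rng.standard_normal((10, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
h = entropy_score(probs)   # default entropy-based score
e = energy_score(logits)   # drop-in energy-based replacement
```

Either score consumes the same stack of perturbed predictions, which is why combinations such as TULiP+GEN reported later in the thread are straightforward to construct.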
"{\"comment\": \"**Q1: Comparison with Deep Ensembles (DE)**\\n\\nDue to the high computational cost of DE, we are unable to evaluate it under our experimental setup. However, we do have the following comparison on the near-OOD setup of CIFAR-10 and CIFAR-100, following [1]:\\n\\n| | CIFAR10-near | CIFAR100-near |\\n|-|---|---|\\n|DE|90.6|82.7|\\n|TULiP|89.7|81.3|\\n\\nUnder this setup, it is clear that TULiP does not fall much behind Deep Ensembles, despite restricted access to information (e.g., the training process and training data) and a much smaller computational overhead.\\n\\nFurthermore, *in the revised manuscript Appendix C.4*, we report experimental results under the setting of [2], where TULiP outperforms Deep Ensembles in this covariate-shift scenario.\\n\\n[1]. OpenOOD: Benchmarking Generalized Out-of-Distribution Detection, NeurIPS'22 Datasets and Benchmarks \\n[2]. Single model uncertainty estimation via stochastic data centering, NeurIPS'22\"}",
"{\"metareview\": \"This paper introduces TULiP, a test-time uncertainty estimation method designed for out-of-distribution (OOD) detection. TULiP leverages a linearized training dynamics framework based on the Neural Tangent Kernel (NTK) to approximate uncertainty at test time. Specifically, the method perturbs the model weights in the linearized space to compute predictive uncertainties, which are then used to distinguish OOD samples from in-distribution (ID) data.\\n\\nWhile TULiP operates at test time, its reliance on training assumptions disqualifies it as a true post-hoc OOD detector.\\nThe authors should reframe their claims and emphasize that TULiP is a training-dependent uncertainty estimation method rather than a universally applicable post-hoc OOD detector.\\n\\nTherefore, it is recommended that the paper be revised and resubmitted in a future version. The authors should carefully address the reviewers' comments provided throughout the review process. Below are additional comments from Reviewer enPE after the rebuttal.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewer enPE has primary concerns regarding the positioning of this paper's contributions and the absence of two critical comparative studies.\", \"While TULiP establishes a theoretical framework, its implementation involves adding Gaussian noise to the pre-trained checkpoint to create multiple \\\"pseudo\\\" ensemble models. Additionally, the \\u201cearly checkpoint\\u201d in Algorithm 1 defaults to zero (Line 376), despite being labeled optional. This creates a disconnect between the theoretical foundation and the practical implementation, which makes it difficult for me to be convinced that the proposed theory sufficiently supports the implementation.\", \"By injecting Gaussian noise into the weights of the pre-trained checkpoint, TULiP generates varying predictive distributions, which are then combined with different OOD score functions. 
Although TULiP is presented as a post-hoc OOD detector, I believe it more appropriately belongs in the category of enhancement methods. The authors appear to agree with this perspective. A fair evaluation would require comparisons with other enhancement methods, such as ReAct [1] and RankFeat [2].\", \"The utility of TULiP is demonstrated through its application to OOD detection. However, its deployment may face significant limitations since it requires access to the weight parameters of each layer. Furthermore, TULiP's performance is inferior in far-OOD scenarios.\"]}",
"{\"summary\": \"This paper proposes a test-time post-hoc OOD detection method, which is theoretically driven by considering hypothetical perturbations applied to model parameters before convergence, allowing for the computation of an uncertainty score. The overall idea is interesting. However, there are a few points to be clarified.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper is overall well-written. The idea is theoretically driven and offers good interpretability. The method is thoroughly evaluated and demonstrates excellent performance compared to many OOD detection methods.\", \"weaknesses\": \"1. What's the motivation for calculating the upper bound of variations for uncertainty quantification? As shown in Eq. 1, the objective is to estimate the variance given different parameter initializations. To solve this, the DNN is first linearized locally with the NTK theory, and the upper bound for introducing the changes is calculated with the NTK theory. The paradox is: if the parameters can already be perturbed, why is the NTK needed for calculating the upper bound? Besides, calculating the upper bound will yield biased estimates of uncertainty. Another simple way to achieve this might be to directly apply random perturbations to the network parameters (like random noise injection or dropping out parameters), which can easily produce an ensemble of neural network parameters. What is the advantage over these methods?\\n\\n2. Given that $\\\\lambda \\\\in [\\\\sqrt{o}, 3 \\\\sqrt{o}]$, where $o$ represents the number of output dimensions, why does Figure 4 only explore the range of $\\\\lambda$ values between 0 and 3 on ImageNet-200? The authors should consider exploring a broader range of this hyperparameter.\\n\\n3. The authors mention that TULiP is over three times faster than ViM, noting that ViM takes more than 30 minutes just to extract ID information on a recent GPU machine. 
However, it appears that the proposed method requires $M=10$ forward passes per sample for OOD detection. Compared to classic OOD detectors like EBO, does this imply that the detection speed of the proposed method is relatively slow?\\n\\n4. In the experiments, the authors calculated Equation 8 using 256 samples from the ID dataset (ImageNet-1K) and 128 samples per OOD dataset. However, the authors do not clarify how these 256 ID samples and 128 OOD samples were selected or whether OOD samples align with test samples. Additionally, did the authors know beforehand which samples were ID and OOD when using these samples?\\n\\n6. Have the authors considered the impact of different types of OOD data? For example, have the authors considered situations where OOD data is very far from ID data to improve detection of far-OOD?\\n\\n7. Why can Equation 11 be approximated in the way proposed by the authors? This approximation ($\\\\nabla_{\\\\boldsymbol{\\\\theta}} f_T^{emp}(\\\\boldsymbol{z}) \\\\boldsymbol{\\\\Gamma} \\\\approx \\\\nabla_{\\\\boldsymbol{\\\\theta}} f(\\\\boldsymbol{z})$) only considers the impact of the parameters in each layer and does not account for the effect of the order of layers with the same number of parameters in the network. Figure 1: a) Although it presents training trajectories under different parameters, it does not indicate which specific layer each color represents. The authors should conduct more experiments and theoretical analyses to explore this aspect.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**W2: Why does Figure 4 only explore the range of\\u00a0$\\\\lambda$\\u00a0values between 0 and 3 on ImageNet-200?**\\n\\nWe would like to clarify that in Figure 4, the value of $\\\\lambda$ should be read as, e.g., $2 \\\\sqrt{o}$ when the horizontal tick or label shows $2$. Therefore, in Figure 4, the considered range of $\\\\lambda$ is $0 \\\\sim 3 \\\\sqrt{o}$. In Figure 4 c) d), $\\\\lambda = 1.5 \\\\sqrt{o}$ is considered as it yields better performance on ImageNet-200. However, to keep a relatively small hyper-parameter range for easy tuning on a validation set, we have suggested the range $\\\\lambda \\\\in [\\\\sqrt{o}, 3\\\\sqrt{o}]$ and reported the results in our tables based on this choice.\\n\\n**W3: Concerns regarding computational efficiency.**\\n\\nAs you have pointed out, TULiP requires $\\\\mathcal{O}(M)$ forward passes. It is true that TULiP is slow in this sense. Nevertheless, forward passes are relatively cheap, which does not necessarily mean TULiP is 10 times slower than, e.g., EBO. TULiP also gives better performance, as shown in Table 1, at the cost of computational efficiency, yet remains tractable under post-hoc scenarios. We will add this discussion to our revised manuscript.\\n\\n**W4: ID / OOD setups, especially in Figure 1.d).**\\n\\nIn all of our experiments, we consider the ID and corresponding OOD sets from the dataset pairings described in Section 5.2 (lines 446-453, 459-464). Since OOD datasets are manually selected for a given ID dataset, we know beforehand whether a given datapoint is ID or OOD. We will revise the manuscript to improve the clarity. For Figure 1 d), all samples are uniformly sampled from corresponding datasets, and color represents the dataset (lines 406-411).\\n\\n**W5: Impact of different types of OOD data.**\\n\\nIn this work, we did not consider special cases, e.g., very-far OOD data. 
Considering those special cases might indeed be beneficial for far-OOD performance, and it could be a promising research direction for future work.\\n\\nIn our current work, we aim to develop a theoretically-driven post-hoc OOD detector that works for all kinds of OOD data, whether near or far from the ID data. Indeed, in Table 1, TULiP's performance on far-OOD is by no means bad, since ViM requires training-data access and ASH comes with no theory and is sometimes unstable (e.g., Appendix C.3). At the same time, TULiP achieves remarkable results in near-OOD settings.\"}",
"{\"comment\": \"Thanks for conducting the suggested experiments.\\n* Based on the results utilizing ViT-B-16 as the backbone with ImageNet-1k as the ID dataset, it seems that TULiP can be regarded as an enhancement method for OOD detection, similar to ReAct [1] and RankFeat [2]. I suggest that the authors discuss these methods in the related work section to better contextualize TULiP's contribution.\\n* I reviewed Table 8 in Appendix C.4 and noticed that the baseline results were cited from [3]. I\\u2019m curious why you didn\\u2019t use the reproduced results from your machine, as generating them would only require a single forward pass.\\n\\n\\n[1] ReAct: Out-of-distribution Detection With Rectified Activations. NeurIPS, 2021. \\n[2] RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection. NeurIPS, 2022. \\n[3] Openood v1.5: Enhanced benchmark for out-of-distribution detection. NeurIPS, 2023.\"}",
"{\"comment\": \"Regarding W8: If TULiP is only applicable to limited architectures, it would be beneficial to highlight this in the limitations section.\", \"regarding_w9\": \"Thank you for clarifying the goal of TULiP. However, I believe there may have been a misunderstanding about my comments on [3]. I explicitly stated:\\n\\n`The purpose of TULiP is somewhat ambiguous\\u2014whether it aims to detect covariate-shift OOD samples or to demonstrate robustness against them. If the LATTER is the case, a fairer comparison would be with methods developed explicitly for covariate-shift OOD samples, such as [3].`\"}",
"{\"comment\": \"Thanks for the authors' response.\\n\\nMy concerns have been addressed, and I will keep my rating.\"}",
"{\"summary\": \"This work introduces TULiP (Test-time Uncertainty by Linearized fluctuations via weight Perturbation), a post-hoc out-of-distribution (OoD) score that leverages the epistemic uncertainty of a trained network. Grounded in the theoretical framework of linearized training dynamics, TULiP demonstrates effectiveness in detecting both semantic-shift and covariate-shift OoD scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This work proposes an uncertainty-based score to detect both semantic-shift OOD and covariate-shift OOD without accessing the training data.\", \"The derivation of the proposed bound for $\\\\|f_T(x)-\\\\hat{f}_T(x)\\\\|$ and its upper bound is written down thoroughly.\"], \"weaknesses\": [\"Line 018: Could the authors clarify what is meant by \\u201cother problem settings\\u201d?\", \"Connection Between Concepts (Lines 72-73): The relationship between semantic shift and covariate shift in Out-of-Distribution (OOD) detection and epistemic uncertainty is not clearly motivated [1]. Could the authors provide further elaboration on this connection?\", \"Post-hoc OOD Detectors: The related work section appears somewhat outdated and incomplete. A notable aspect of TULiP is its ability to perform OOD detection without access to the training data, which aligns it with post-hoc detection methods. Additionally, TULiP addresses both semantic-shift and covariate-shift OOD detection. Therefore, the related work should be expanded to include recent studies on post-hoc detectors for semantic-shift OOD [2] and covariate-shift OOD [3], respectively.\", \"Clarity of the Paper: The overall clarity of the paper could be improved. For instance, the section on the theoretical framework (particularly sections 3.1 and 3.2) could be condensed, as it does not represent a primary contribution of this work.\", \"The discussion on the bound for $\\\\|f_T(x)-\\\\hat{f}_T(x)\\\\|$ and its upper bound is appreciated. 
However, the implementation section (Section 4) is somewhat unclear. For example, the \\u201cearly checkpoint\\u201d in Algorithm 1 is set to zero by default (Line 376), though it is marked as optional. Additionally, the OOD score is based on Shannon entropy, which has been explored previously in [2]. The primary difference appears to be that [2] directly utilizes the predictive distribution from a trained network, whereas TULiP relies on perturbed predictions.\", \"Performance on ImageNet-1k: When evaluating OOD detection on large-scale benchmarks using ImageNet-1k as the in-distribution data, the method\\u2019s performance in terms of AUROC and FPR95 is worse than ASH (see Table 1). This discrepancy raises questions about the claim in the abstract regarding TULiP achieving state-of-the-art (SOTA) performance. Additionally, a discussion on why performance degrades with ImageNet-1k as the in-distribution data would be helpful.\", \"Optional Suggestion for Table 1: For improved readability, Table 1 could be divided into two sections: one for methods that do not require training data access, and another for those that do.\", \"Additional Architectures: Testing on alternative architectures, such as BiT [4] and ViT [5], would further demonstrate the robustness and effectiveness of the proposed approach.\", \"Results on Table 2: It seems inappropriate to compare methods designed specifically for semantic-shift OOD detection, as the current task focuses on covariate-shift OOD detection. Additionally, could the authors clarify the metric reported in Table 2? The purpose of TULiP is somewhat ambiguous\\u2014whether it aims to detect covariate-shift OOD samples or to demonstrate robustness against them. If the latter is the case, a fairer comparison would be with methods developed explicitly for covariate-shift OOD samples, such as [3].\", \"References\", \"[1] Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. 
Machine Learning, 110(3):457\\u2013506, 2021.\", \"[2] GEN: Pushing the Limits of Softmax-Based Out-of-Distribution Detection. In CVPR, 2023.\", \"[3] Semantically Coherent Out-of-Distribution Detection. In ICCV, 2023.\", \"[4] Big transfer (bit): General visual representation learning. In ECCV, 2020.\", \"[5] An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.\"], \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"In addition, we briefly summarize the related revisions that we have uploaded, besides the ones mentioned above, as follows:\\n\\n1. We have modified the abstract for clarification.\\n2. lines 40-41: As you have pointed out, we emphasized the connection between epistemic uncertainty (EU) and OOD here.\\n3. Slightly simplified Section 3.\\n4. Revised line 272: We have removed the \\\"early checkpoint (optional)\\\" input as 1) Such checkpoints are not tractable under strict post-hoc setting, 2) It leads to confusion and 3) After further investigation, we found that the default setting ($\\\\theta_{t_s} = \\\\mathbf{0}$) achieves sufficient performance gain (see below). We deeply appreciate your rightful comments.\", \"table\": \"CIFAR-10 SS-OOD, with a ResNet-18 trained by SGD momentum=0.9 for 400 epochs (Top-1 acc around 86%).\\n| Checkpoints for $\\\\theta_{t_s}$ | near/far AUROC |\\n|--|--|\\n| No ($\\\\mathbf{0}$) | 81.01 / 81.58 |\\n| Yes (Epoch 19) | 81.05 / 81.96 |\\n\\n5. Revised lines 244-246: To emphasize our choice taking $t_s = 0$ and $\\\\theta_{t_s} = \\\\mathbf{0}$, we moved the corresponding description from the end of Section 4 to the beginning.\\n6. Revised Table 1 and lines 458-460: We have added GEN as baseline as well as TULiP + GEN. In short, TULiP+GEN boosts GEN performance on CIFAR-10 and ImageNet-1K.\\n7. Revised lines 426-428: We added the citation to [3] and addressed the concerns regarding covariate-shift OOD raised in [3] for enhanced clarity. We believe that this is the appropriate place to address such an issue, rather than Section 2, given that [3] has a different setting compared to our work (as discussed before).\\n8. lines 472-474: We emphasized the results shown in Appendix C.3 (W6).\\n9. 
Added Appendix C.4: We further investigate the performance of TULiP with ViT-B-16 on ImageNet-1K SS-OOD; please refer to the results in Table 8 of the revised manuscript:\\n\\n| Method | ImageNet-1K (ViT) SS-OOD (AUROC) |\\n| ------ | -------------------------------- |\\n| ViM | 77.03/92.84 |\\n| MDS | 79.04/92.60 |\\n| | ^ Requires training data |\\n| EBO | 62.41/78.98 |\\n| MLS | 68.30/83.54 |\\n| ASH | 53.21/51.56 |\\n| GEN | 76.30/91.35 |\\n| TULiP | 73.63/87.98 |\\n\\nIn summary, representative post-hoc methods (without training data access) fail to perform well overall, with a gap between methods with or without training data access can be observed. It could possibly be because of the significant architectural difference between transformers and CNNs. Please see the discussion in the appendix. We believe that additional exploration is mandatory for future research, perhaps by incorporating architectural knowledge of transformers, as we stated in the revised paper.\\n\\nWe hope that the above explanations will help address your concerns, and we look forward to your response. We sincerely appreciate your constructive comments and your time in reviewing our paper.\"}",
"{\"comment\": \"We greatly appreciate your graceful recognition of our paper and your kind response.\\nWe will provide additional discussion below, regarding your recent question.\\n\\nFor example, computer vision tasks like object detection (OD) and semantic segmentation (SS) are fundamental for autonomous driving, as they are the core building blocks to \\\"the eyes\\\" of an autonomous vehicle. As we noted in our paper, it's crucial to know when the input is far from the training set (being OOD) (for OD, see [1]; for SS, see [2]), as deep NNs sometimes provide over-confident predictions, raising safety concerns. Post-hoc OOD detectors (including TULiP) can be easily attached to trained models (OD: mixed regression + classification, SS: classification), helping the model identify and avoid unwanted, potentially unsafe predictions.\\n\\nAs shown above, being post-hoc (like TULiP) broadens the application of a method as it can be applied to pre-trained models. We will further revise our manuscript by describing the practical scenario in detail.\\n\\n[1]. Unknown-Aware Object Detection: Learning What You Don\\u2019t Know from Videos in the Wild, CVPR'22 \\n[2]. Entropy Maximization and Meta Classification for Out-of-Distribution Detection in Semantic Segmentation, CVPR'21\"}"
]
} |
E4OcXAx5Dc | Private Learning Fast and Slow: Two Algorithms for Prediction with Expert Advice Under Local Differential Privacy | [
"Ben Jacobsen",
"Kassem Fawaz"
] | We study the classic problem of prediction with expert advice under the constraint of differential privacy (DP). In contrast to earlier work in this area, we are interested in distributed settings with no trusted central curator. In this context, we first show that a classical online learning algorithm naturally satisfies DP and then design two new algorithms that extend and improve it: (1) RW-AdaBatch, which provides a novel form of privacy amplification at negligible utility cost, and (2) RW-Meta, which improves utility on non-adversarial data with zero privacy cost. Our theoretical analysis is supported by an empirical evaluation using real-world data reported by hospitals during the COVID-19 pandemic. RW-Meta outperforms the classical baseline at predicting which hospitals will report a high density of COVID-19 cases by a factor of more than 2$\times$ at realistic privacy levels. | [
"differential privacy",
"online learning",
"prediction with expert advice",
"follow the perturbed leader"
] | Reject | https://openreview.net/pdf?id=E4OcXAx5Dc | https://openreview.net/forum?id=E4OcXAx5Dc | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"ysSUeWmjdi",
"xYK3QdcigZ",
"vfJKXGGj6E",
"vZSSc4Yspu",
"s9UN46r9uq",
"njtjrKOv41",
"mQOx3HuKKB",
"kKtuFLow7W",
"ipmN78QGpn",
"en6rwVlEbw",
"W2eE3RyybM",
"Rd4hApu59Q",
"PQCHRiwMS2",
"MQjbzTJPMe",
"LQeUBgcuZ4",
"FOb54Jyufd",
"EbCa0Fkamp",
"B5utgYwX7A",
"ABxi6XCQ07",
"9vSrQYRCQe",
"6odjr6bpTi",
"2k51Tc300u"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1737524216392,
1732551362285,
1732471438047,
1732551414094,
1733182449846,
1732469824570,
1732750971063,
1732533250788,
1732710548983,
1732471116752,
1729578438801,
1732508191015,
1734690998290,
1729824022785,
1732470406714,
1730290059274,
1732861866964,
1732762934459,
1732557712468,
1732471005948,
1732470812954,
1730352232359
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_2XWn"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_qnzh"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_qnzh"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_Misn"
],
[
"ICLR.cc/2025/Conference/Submission12800/Area_Chair_nvzS"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_Misn"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_2XWn"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_2XWn"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_qnzh"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_kNsQ"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12800/Reviewer_kNsQ"
]
],
"structured_content_str": [
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Highlighting\", \"comment\": \"At the suggestion of reviewer 2XWn, we have added highlighting to draw attention to our changes. This appears to have slightly perturbed the vertical spacing of the text, forcing Table 1 to be placed on page 11. The current text *can* fit in 10 pages without the highlighting, as demonstrated by the previous version, and so we hope this minor violation of the style guide during the rebuttal process won't be an issue.\"}",
"{\"comment\": \"Thank you for your helpful comments and kind words about the importance of our problem! We respond to all of your questions and concerns one-by-one below:\\n\\n**W1:** We understand and sympathize with this frustration. One of the struggles in writing this paper has been that the main results we prove are very difficult to express in a clean, decontextualized way. We can say a lot about the privacy amplification of RW-AdaBatch both theoretically and empirically, but there\\u2019s no closed form expression for the exact tradeoff function that we can present as a standalone theorem. Similarly, the regret bound of RW-Meta is difficult to understand until the reader has the context to understand the meaning of the $\\\\Sigma^*$ matrix. We absolutely understand that this makes things more difficult for the reader, and as part of our revision have tried to do a better job of highlighting our results in a simple and direct way at the beginning of each section whenever possible (e.g. \\u201cWe show that the expected regret of RW-AdaBatch is at most $(1+\\\\sqrt{2}\\\\alpha)$ times greater than the expected regret of RW-FTPL\\u201d). To some extent, however, we fear that this weakness might be a necessary consequence of the kinds of results that we prove.\\n\\n**W2:** This is a valid observation, and so we have added in an explicit comparison with the FTPL algorithm of Agarwal and Singh 2017, which is the state of the art central-DP algorithm in the sorts of low-dimensional settings that we target. We agree that this comparison can help shed light on the cost of satisfying local DP, as well as the relative benefit of moving beyond the existing paradigm of data-independent experts used in prior work.\\n\\n**W3:** In the case of RW-AdaBatch, we wish to emphasize that the privacy gain we provide is very nearly free - there is only minimal computational overhead, and the impact on regret is provably insignificant. 
In that context, we would argue that even a moderate increase in privacy is noteworthy, particularly because a constant-factor improvement in privacy can normally only be achieved by a constant-factor degradation in utility.\\n\\nIn the case of RW-Meta, we would push back against the characterization that we improve utility only by a small constant factor. In terms of absolute performance, we improve over RW-FTPL by a factor of more than 2x in most settings, and improve over the state of the art central DP algorithm by 50% or more. We feel that this constitutes a significant improvement, especially because in many settings we actually achieve negative static regret - given the fundamental limitations of data-independent experts, the gap in performance we observe is almost certainly not surmountable by future algorithmic improvements to static experts algorithms.\\n\\n**Q1:** The regret is with respect to the non-noisy data - this is defined slightly earlier at the beginning of the utility analysis section (line 352 of the updated version).\\n\\n**Q2:** This is a very good question which gets to the heart of our technical contribution. The key issue is that we need our entire system to satisfy DP, not just a single component. If we want to identify learners that have done well on the data so far, we need to expend privacy budget to do that. But at the same time, if we want non-trivial learners that can change their predictions based on the data so far, then we need to expend privacy budget on that too. If we treat these two components separately, then we\\u2019ll be forced to split the privacy budget between them and accept some combination of A) less accurate learners overall or B) less precise identification of the best learners.\\n\\nRW-Meta can basically be thought of as a technique to avoid having to make that choice. 
We expend 100% of our privacy budget on making the learners as accurate as possible, but still manage to select strong learners with reasonable accuracy by exploiting some useful properties of Gaussian noise as well as the linearity of our gain functions. This gives us higher overall utility at the same privacy cost compared to just using RW-FTPL over the learners directly.\\n\\n**Q3:** Thank you for the suggestion! We have revised Table 1 to include a column for the best-performing linear model in each setting, alongside the newly added central-DP algorithms. \\n\\n**Q4:** This is a typo, thank you for catching it!\\n\\n**Q5:** Yes, this has been clarified in the revision.\\n\\n**Q6:** These are the same $\\\\alpha$ and $\\\\beta$ functions as in the immediately preceding lemma (now 4.2.2), defined in terms of the specific mixture of Gaussians that we prove is a lower bound for RW-AdaBatch\\u2019s tradeoff function. We have added some extra text to clarify this.\"}",
"{\"comment\": \"Absolutely! We have uploaded a draft with highlighting as suggested.\"}",
"{\"comment\": \"Lines 223-24 do indeed refer to an unspecified general function as described in the Problem Setting section. Our results require that the server is able to derive a Gaussian approximation of the true gain vector, but do not change based on how exactly that gets implemented in any given setting. We did not want to limit our focus to a single specific implementation in a way that would understate the generality of our method, which is why we have tried to state things in a less specific way (e.g. by simply saying that 'the server computes $\\\\tilde{g}_t$' in our pseudocode and describing in Section 3.4 the formal requirements this computation needs to satisfy).\\n\\nWe do appreciate that abstraction can make things less approachable. We actually meant for the example on line 168 to help with that issue by providing a simple, concrete instantiation to help readers think through our results, in the same way that a text might encourage the reader to think about $\\\\mathbb{R}^2$ while proving a theorem about vector spaces generally. We didn't mean to imply that our results only apply in that specific instance, particularly since Section 5 defines gain vectors in a different way entirely (as you observe).\\n\\nIn any case, we are grateful for the decision to reduce the confidence of your review even if the score did not change, and would like to thank you again for a very helpful and constructive discussion!\"}",
"{\"title\": \"Paper Revision\", \"comment\": \"Dear Reviewers and AC,\\n\\nWe appreciate your many thoughtful and insightful comments! We have uploaded a revised draft of our paper that should hopefully address all of the major issues raised. In addition to many smaller changes that we will describe in responses to individual reviews, we would like to draw the reviewers\\u2019 attention to three fairly significant changes from the original version:\\n\\n- We have separated the previous section on \\u2018Background and Related Work\\u2019 into two: a \\u2018Related Work\\u2019 section which is essentially unchanged from the first draft, and a \\u2018Problem Setting\\u2019 section which is largely new. We intend for this second section to clarify several points that were ambiguous in our first draft and also to better connect our presentation of background concepts to the particular settings we consider.\\n- At the suggestion of reviewers 2XWn and qnzh, we have performed additional experiments to compare the performance of our algorithms with the state-of-the-art FTPL algorithm of Agarwal and Singh (2017), described in the Experiments section.\\n- We have expanded and rewritten our presentation of RW-AdaBatch to better clarify the specific nature of the privacy amplification we are studying.\\n\\nTo make room for these changes, we have moved the Limitations section to the appendix. We believe that our paper is much stronger now as a result of incorporating the reviewers\\u2019 suggestions, and greatly look forward to further discussion.\\n\\nWarm regards,\\n\\nThe authors\"}"
"{\"comment\": \"Oh, yes, our apologies - we did misunderstand your earlier question. The regret is measured with respect to the reward of running the best learner on the noisy data. In other words, all of the learners are required to satisfy differential privacy, and our goal is to identify the best overall private learner (which could still be quite bad if our privacy constraints are severe).\\n\\nVisually, you can see this in Figure 2. The yellow envelope in the far left column $(\\\\mu = \\\\infty)$ represents the performance of the learners on the non-noisy data. As $\\\\mu$ decreases, the performance of the learners decreases as well because they are forced to make their predictions using noisier data. Our main result for the utility of RW-Meta essentially says that the bold line can't be *too* far below the top of the yellow envelope in any given plot, but without specific assumptions about the learners we can't say anything about how adjacent plots in the same row will compare. \\n\\nPractically speaking, we try to control the gap between the noisy and non-noisy settings by using learners that are designed to be robust against additive Gaussian noise, which helps ensure that our regret bounds are still saying something meaningful about absolute performance even at high privacy levels.\"}",
"{\"title\": \"Comment on the changes\", \"comment\": \"Thanks for the rebuttal, could you please post a draft with the changes from the original submission highlighted to make the evaluation easier?\"}",
"{\"comment\": \"Thank you for your efforts in addressing my questions. While the absence of a concise expression of the final result makes it challenging to compare with existing methods theoretically, I appreciate the extensive empirical evidence presented that demonstrates the advantages of the proposed algorithms.\\n\\nI still have some questions regarding my first inquiry. It seems my earlier question was unclear, because the $g_i$'s should be the gain instead of the data. To clarify: I am interested in whether the regret is measured with respect to the reward of *running* the best learner on non-noisy data, where the learner\\u2019s actions at each round are based on non-noisy data.\\n\\nIf the answer is yes, could you briefly explain how this is achieved? According to line 363, the input to each learner is $\\\\tilde{g}_i$, which could lead to actions that differ significantly from those in the non-noisy scenario. I am curious about how you relate the rewards in these two settings in the proof.\"}",
"{\"comment\": \"**W4:** We agree that a formal lower bound would be a very interesting contribution and a promising direction for future work. Heuristically, it seems very plausible that regret is lower bounded by $O(\\\\sqrt{Tn\\\\log n})$ under local DP. This is because one of the key quantities used when analyzing regret in FTPL algorithms is the expected $\\\\ell_\\\\infty$ error in the cumulative sum of gain vectors, which can be lower bounded by $\\\\Omega(\\\\sqrt{Tn \\\\log n})$ using existing results for mean estimation under local DP[3; proposition 4]. Simultaneously, RW-FTPL is already able to achieve regret on that order, so no stronger lower bound is possible. That said, we admit that we do not have a formal proof for this claim yet.\\n\\nWe would also like to observe that non-trivial lower bounds on regret were not proved in the central setting until Asi et al. (2023b), over a decade after research on continual learning under differential privacy was initiated. Given the relative scarcity of prior work on the local setting, we hope that the absence of a lower bound in this paper is not seen as a fatal weakness.\\n\\n[3] Duchi, John C., Michael I. Jordan, and Martin J. Wainwright. \\\"Local privacy, data processing inequalities, and statistical minimax rates.\\\" arXiv preprint arXiv:1302.3203 (2013).\\n\\n**Q1:** This can be done using the primal-dual characterization of $f$-DP, which we allude to in corollary 4.2.1. Concretely, a mechanism satisfies $\\\\mu$-GDP if and only if it satisfies $(\\\\varepsilon, \\\\delta(\\\\varepsilon))-DP$ for all $\\\\varepsilon \\\\geq 0$, where $\\\\delta(\\\\varepsilon) = \\\\Phi(-\\\\varepsilon/\\\\mu + \\\\mu/2) - e^\\\\varepsilon \\\\Phi(-\\\\varepsilon/\\\\mu - \\\\mu/2)$ [4; Corollary 2.13]. So, given epsilon, we would solve numerically for the value of $\\\\mu$ that gives our desired delta and then set the noise scale as $\\\\eta = \\\\Delta/\\\\mu$.\\n\\n[4] Dong, Jinshuo, Aaron Roth, and Weijie J. Su. 
\\\"Gaussian differential privacy.\\\" Journal of the Royal Statistical Society: Series B (Statistical Methodology) 84.1 (2022): 3-37.\"}",
"{\"summary\": \"This paper addresses the problem of prediction with expert advice under local differential privacy, proposing two algorithms based on the classical \\\"Prediction by random-walk perturbations\\\" algorithm: (1) RW-AdaBatch, which enhances privacy by batching incoming data; and (2) RW-Meta, which adapts to data shifts by selecting from multiple candidate learners. The authors provide both theoretical analysis and empirical evaluation, demonstrating the advantages of the proposed algorithms.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Online prediction with expert advice is a fundamental problem in online learning. The investigation of this problem under local privacy constraints is crucial in both theory and practice due to the sensitive nature of machine learning tasks.\\n2. The authors substantiate their claims with rigorous theoretical analysis and empirical evaluation, demonstrating the high performance of the proposed algorithms.\", \"weaknesses\": \"1. The absence of a main theorem (like Theorem 2 in [1]) summarizing the regret of the proposed algorithms (in terms of $\\\\varepsilon, \\\\delta, n$ and $T$) limits the reader's ability to digest the results and identify the contributions effectively.\\n2. The paper lacks a comparison with existing private online learning algorithms. Though they are not designed for the setting considered here, a comparative analysis with existing private online learning algorithms could provide valuable insights into the privacy-utility tradeoff, particularly regarding the additional costs incurred when transitioning from central to local differential privacy.\\n3. It seems the proposed algorithms only improve the privacy/utility by a small constant factor, which may not be significant.\\n\\n[1] Hilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private online prediction from experts:\\nSeparations and faster rates. 
In The Thirty Sixth Annual Conference on Learning Theory, pp.\\n674\\u2013699. PMLR, 2023.\", \"questions\": \"1. It was stated that the regret of RW-Meta in (3) is with respect to the best learner in the candidates (line 355). Could you clarify whether the regret is with respect to the gain of the best learner on non-noisy data (i.e., $g_1,\\\\dots, g_T$) or on noisy data (i.e., $\\\\tilde{g}_1,\\\\dots,\\\\tilde{g}_T$)?\\n2. The goal of RW-Meta is to choose a learner and follow its action at each time step. It seems this can be done by running RW-FTPL over the set of learners. Why not just run RW-FTPL over the set of learners?\\n3. Why did you compare RW-Meta to RW-FTPL in Table 1? I think it would be better to compare RW-Meta to the Linear Models instead of RW-FTPL. As shown in Figure 2, Linear Models outperform RW-FTPL a lot for all $\\\\mu$'s. The performance of RW-Meta should largely rely on these Linear Models. Thus, listing the performance of the linear models would be more meaningful.\\n4. In line 161, does it mean that $v_{(k)}$ is the $k$-th smallest element? Since the gap $v_{(n)} - v_{(n-1)}$ seems to be the largest value minus the second largest value.\\n5. In line 195, is the distribution $n$-dimensional Gaussian?\\n6. What are the functions $\\\\alpha$ and $\\\\beta$ in Corollary 3.2.1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for the detailed response. I have carefully read the rebuttal and the comments from other reviewers. I will maintain my original score.\"}",
"{\"metareview\": \"The paper presents two algorithms for LDP learning with expert advice, and analyses their performance both theoretically and empirically.\\n\\nIn terms of strengths, the paper addresses a relevant problem and the methods are supported with theoretical as well as empirical evaluations.\\n\\nIn terms of weaknesses, individual reviewers raise a number of concerns ranging from unclear presentation of the distributed setting and lack of clear main theorem, general unpolished presentation and lack of lower bounds.\\n\\nOverall, none of the reviewers strongly support the paper, while two support rejection even after the rebuttal and revision. In light of this, it is clear that the paper does not meet the bar for acceptance to ICLR.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers reacted to the author rebuttal and noted that while some of their concerns were addressed, others remained. There was no further private discussion as the decision was clear.\"}",
"{\"summary\": \"The paper studies the problem of distributed online prediction with expert advice under local DP constraints. They propose two algorithms RW-AdaBatch and RW-Meta. The paper provides a theoretical analysis of the proposed algorithms. Additionally, the paper provides experimental results using real-world data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written. The background and related previous work are clearly explained. The algorithms (RW-AdaBatch and RW-Meta) are described in detail.\\n\\n\\n2. The paper includes experiments on real-world data from the COVID-19 pandemic.\", \"weaknesses\": \"1. The distributed setting is not fully explained. It is unclear how multiple players cooperate together in this distributed setting. Can they share their observations with others? Is there any communication between them?\\n\\n\\n2. Recent work on differentially private prediction with expert advice includes results for both pure $\\\\varepsilon$-DP and approximate $(\\\\varepsilon,\\\\delta)$-DP [1,2], which are also cited in this paper. However, this paper only provides privacy guarantees for approximate DP. Can RW-AdaBatch or RW-Meta be extended to pure DP as well? If not, could you elaborate on the challenges involved?\\n\\n\\n[1] Asi, Hilal, et al. \\\"Private online prediction from experts: Separations and faster rates.\\\" The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023.\\n\\n[2] Asi, Hilal, et al. \\\"Near-optimal algorithms for private online optimization in the realizable regime.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n\\n3. The paper states, *\\\"recent work has shown that very private algorithms can be forced to incur $O(T)$ regret by adaptive adversaries (Asi et al., 2023b). We therefore focus exclusively on oblivious adversaries in this work.\\\"* This statement is somewhat misleading and may benefit from clarification. Asi et al. 
(2023b) show linear regret for the pure DP case, while it is still possible to achieve sub-linear regret for approximate DP. \\n\\n\\n4. The paper does not provide a regret lower bound for the problem.\", \"questions\": \"How should noise scale $\\\\eta$ be set to ensure that RW-AdaBatch or RW-Meta is $(\\\\varepsilon,\\\\delta)$-DP?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your insightful questions and positive comments about the strengths of our paper! We address the weaknesses and questions you list one-by-one below:\\n\\n**W1:** Beyond the medical forecasting task that we consider in our evaluation, we also believe our methods could be of interest for forecasting population movement and regional energy usage. Local differential privacy is attractive in these settings because individual level records can be highly revealing, while prediction with expert advice is a natural model for any forecasting task because it\\u2019s relatively easy to observe aggregate behavior after the fact. We have altered the description of our contributions in section 1.1 to highlight these other domains.\\n\\nSeparately, because prediction with expert advice is one of the most fundamental problems in online learning, we believe it is a natural starting point to explore new methods or proof techniques for online learning generally. We have included some discussion along these lines in Appendix A.1.\\n\\n**W2:** We have added a section on computational complexity for both RW-AdaBatch and RW-Meta. While both algorithms involve non-trivial computations as subroutines, these computations don\\u2019t necessarily become more complex as the dataset grows in size. For instance, RW-AdaBatch only ever needs to find a root of a one-dimensional function regardless of how large $n$ and $T$ are. In fact, it should generally become computationally cheaper as $T$ increases because batch sizes generally increase over time, and no meaningful computation has to be performed in the middle of a batch.\\n\\nMeanwhile, RW-Meta only needs to find the eigenvector corresponding to the maximum eigenvalue rather than the whole eigensystem, which is substantially cheaper. Asymptotically speaking, the biggest challenge for scalability is that RW-Meta requires $O(m^2 + mn)$ memory and computation simply to compute and store the matrices it uses. 
However, the fact that the regret of RW-Meta with respect to the best learner is $O(\\\\sqrt{m})$ in the worst case is already a good reason to avoid taking $m$ too large. In our own evaluation, we found that by far the biggest computational bottleneck came from constantly re-fitting our rolling regression models, while the metalearning itself took only a few milliseconds per iteration.\\n\\n**W3:** This is a very good question, and we have revised our presentation of our utility bounds for RW-FTPL and RW-Meta to make this more explicit in the paper itself. In both cases, the regret bounds are expressed in terms of the noise scale $\\\\eta$ which implicitly depends on both our privacy parameter $\\\\mu$ and the sensitivity $\\\\Delta$. In some contexts (including our Covid evaluation), $\\\\Delta$ is a fixed constant that doesn\\u2019t depend on $n$, and so we do in fact recover regret bounds that are within a multiplicative factor (based on $\\\\mu$) of the non-private baseline. In the worst case, however, $\\\\Delta$ can be as large as $\\\\sqrt{n}$ and we get a regret bound of $O(\\\\sqrt{Tn\\\\log n})$. This matches known lower bounds on expected $\\\\ell_\\\\infty$ error for mean estimation under local DP[1; Proposition 4].\\n\\n**Q1:** There may be some misunderstanding here: the batch size $B$ isn\\u2019t directly set as a hyperparameter. During runtime, RW-AdaBatch selects batch sizes dynamically based on the current gap between the top 2 experts as well as our maximum tolerance for errors, which we set through the hyperparameter $\\\\alpha$. This computation gets repeated every time a batch ends and a new batch size must be chosen. 
We have added a new section to the appendix (A.3) presenting this in more explicit detail.\\n\\nIn practice, we find that the algorithm is not particularly sensitive to the exact choice of $\\\\alpha$ - with $\\\\alpha=0.01$, we have never witnessed RW-AdaBatch suffer higher regret than RW-FTPL, and setting $\\\\alpha$ as high as 1 gives only very small improvements in privacy. We would therefore recommend just using $\\\\alpha=0.01$ in most applications without any tuning.\"}",
"{\"summary\": \"The paper considers online learning under local differential privacy (LDP), focusing more specifically on prediction with expert advice under full information setting in the oblivious adversary model. The authors propose 2 LDP algorithms and one that works with centralised DP, and analyze the utility and privacy of the proposed methods. The authors empirically test the proposed LDP methods on a real data example.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"i) While the paper continues a well-established line of research on DP continual learning, it focuses on the LDP setting, which has fewer contributions.\\n\\nii) The writing is generally good, although there are some major caveats as described in rest of this review.\\n\\niv) Relaxing the assumption of having a trusted central party can be important.\", \"weaknesses\": \"i) The paper completely brushes over many details of the problem and of the proposed solutions, which makes it unacceptably cumbersome and error prone to read. For example, while the stated focus of the paper is a distributed setting with LDP, this is not easy to notice from the writing: the proposed algorithms and definitions do not explicitly mention any separate parties, nor communication steps or clearly state which party does what. This generally makes it bothersome to try and check how the proposed algorithms actually fit into the stated setting.\\n\\nii) Some of the claimed contributions seem inaccurate and somewhat overstated (see Questions below for details).\\n\\niii) The paper omits some empirical comparisons to existing baselines (see Questions below for details)\", \"questions\": \"### Update after discussion\", \"i_still_recommend_rejecting_the_paper_as_it_currently_stand\": \"as I have mentioned in the comments, especially after the edits, the paper feels very unfinished to the point of being hard to understand. 
I therefore cannot recommend accepting the paper, as I am unsure if I have understood the presented work correctly based on the writing. I have lowered my confidence to better reflect this uncertainty.\\n\\n### Comments before discussion\", \"questions_and_comments_in_decreasing_order_of_importance\": \"1) Especially Sec2: currently, it is unnecessarily hard to try and figure out some basic assumptions you use. Please explicitly define what are neighbouring distributions and which neighbourhood relation you use, i.e., what do you actually try to protect with DP.\\n2) On the adaptive batching and resulting privacy: based on the abstract and stated contributions, I find it very surprising that the batching does not actually give any amplification in the LDP setting. Please rewrite the related sections to make this clearer from the beginning.\\n3) Related to the previous comment, as the adaptive batching algorithm assumes a trusted central party, its empirical performance should be compared to the existing methods that assume the same setting, e.g., Asi et al. 2023 (cited in the current paper).\\n4) Please explicitly consider your chosen setting when formulating the algorithms and the discussion.\\n5) As per the [note on arXiv](https://arxiv.org/abs/1802.02638), Ullman 2018 cited in the current paper has been withdrawn by the author. Please check the reference and update to the new version as instructed by the author.\\n6) Lines 313-14: why is $<x_{t,i}, \\\\tilde g_{t}>$ an unbiased estimate? Do you assume something specific on the learners?\\n\\n\\n## Minor comments etc. 
(no need to comment or acknowledge)\\n\\n* Please fix typos: lines 121-22 extra dot after Jain et al.\\n* Lines 192-93: mention what is $G_{t-1}$ .\\n* Lines 308-09: should be learner $i$, not each learner?\\n* Alg 2 seems to be missing $\\\\tilde g_0$.\\n* Lines 86-87: I would not understand what LDP means from this definition.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I can see no specific ethical issues with the paper.\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Final comments\", \"comment\": \"Thanks for the rebuttal and for the effort in updating the draft and doing the highlights. Unfortunately I am not eager to change my score at this point:\\n\\nDespite the changes (and partly because of the new additions), several parts of the paper feel more like a draft or a workshop version; for example, the distributed setting still feels more like an afterthought (e.g. there is no explicit threat model, many specifics are simply not mentioned), things are often stated in a very general and non-specific way which makes it hard to pinpoint what exactly happens \\n(e.g. Alg 1 lines 223-24 server computes $\\\\tilde g_t \\\\sim \\\\mathcal N(g_t, \\\\eta^2I_n )$: how is this calculated, is this some unspecified general function $g_t=(f(\\\\tilde g(D_{1,t}),\\\\dots,\\\\tilde g(D_{?,t})))$ as on lines 160-161, or, writing $S_t$ for the set of clients chosen at step $t$ for updating, should this be $\\\\sum_{c \\\\in S_t} \\\\tilde g(D_{c,t})$, which could make sense comparing to lines 234-35 but is not written out anywhere, or something else?), and some things are simply stated in a confusing manner (e.g. in the Problem Setting, lines 168-172 very much give the impression that up to the experiments in Sec 5, everything is based on 1 single client communicating at any given time step, while the algorithms have unspecified number of clients in plural sending updates at each time step).\\n\\nAlso as a minor addition, for ease of reading, it might be helpful to add a table with all the notations somewhere. \\n\\nI would encourage the authors to spend some time and really focus on writing the paper clearly, as the contribution otherwise, as far as I can judge from the current version, seems nice. I have lowered my confidence score to better reflect the uncertainty I currently have about the paper.\"}",
"{\"comment\": \"Thank you for your explanation, which addressed my concern. I recommend including a brief discussion on this point in the final version.\\n\\nI have raised my score accordingly.\"}",
"{\"comment\": \"Thank you for clarifying these points. Like the other reviewers, I still find the results somewhat hard to digest, although I believe the author has made their best effort. I will maintain a positive review with moderate confidence.\"}",
"{\"comment\": \"Thank you for your positive comments and insightful questions! We address all of your questions one-by-one below:\\n\\n**W1:** We have reorganized the introductory sections of our paper to address this issue. In particular, we now have an explicit section on our Problem Setting which describes the specific communication model we have in mind. To answer the questions specifically, we do not consider any communication or coordination between clients: at each time step, they merely compute private functions of their local data and send them to the central server for post-processing into noisy gain vectors.\\n\\n**W2:** This is a really interesting question! Currently, our privacy analysis treats all of our algorithms as post-processings of Gaussian mechanisms, which don\\u2019t satisfy pure DP for any value of epsilon. So, if we wanted to provide pure DP guarantees, we would need to either change our algorithms or change something about our analysis.\\n\\nOn the algorithmic side, the most obvious extension would be to replace our Gaussian noise with Laplacian. This would work reasonably well for RW-FTPL - the sum of independent Laplacians should converge to be approximately Gaussian fairly quickly, and so we would likely end up with a similar overall regret bound plus some error term. Conceptually, we could also imagine deriving a variant of RW-AdaBatch for this setting, but the proof techniques would look different (in particular, the analogue to continuous Brownian motion would no longer hold), and we expect that the overall level of privacy amplification would be weaker because Laplacian tails are heavier than Gaussian tails, necessitating more conservative batch sizes. 
Finally, we are skeptical that this approach would work at all for RW-Meta - our design and evaluation of that algorithm relies very heavily on the particular algebraic properties of Gaussians in a way that would be difficult to extend to any non-stable distribution.\\n\\nOn the analysis side, the most plausible approach to our eyes would be to set aside local DP and focus on satisfying pure DP in the central setting. For a fixed time horizon, the set of possible outputs our algorithm can produce is finite (if exponentially large), and so it should certainly be true that our outputs satisfy pure DP for some finite epsilon. It\\u2019s less clear whether they could be made to satisfy pure DP with a small epsilon, however, or whether those guarantees could hold over arbitrary time horizons: there\\u2019s been some interesting recent work in this vein on the pure DP properties of Gaussian Noisy Max [1], but directly translating those results to our setting and using composition theorems would lead to an extremely high privacy cost very quickly. Getting a stronger guarantee would almost certainly require analyzing the entire sequence of outputs as a unit instead of reasoning timestep-by-timestep as we do currently.\\n\\nOverall, our feeling is that it is probably easiest to provide pure DP guarantees when designing algorithms in the FTRL regime, as in both recent papers by Asi et al. In the context of FTPL, there are a lot of reasons to want to use Gaussians, including stronger utility guarantees [2; Theorem 3.15], and that naturally pushes us towards approximate DP.\\n\\n[1] Lebensold, Jonathan, Doina Precup, and Borja Balle. \\\"On the Privacy of Selection Mechanisms with Gaussian Noise.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2024.\\n\\n[2] Lee, Chansoo. Analysis of perturbation techniques in online learning. Diss. 2018.\\n\\n**W3:** While Asi et al. 
(2023b) do prove lower bounds that are specific to pure DP, Theorem 9 and Theorem 11 of that paper show that approximate DP algorithms can also be forced to suffer linear regret by adaptive adversaries in some circumstances. We are open to being corrected on this, but at the moment we believe that the statement accurately reflects the results in that paper.\"}",
"{\"comment\": \"Thank you for your many helpful and constructive suggestions! We largely agree with all issues raised and have done our best to address them in the revision. We respond point-by-point below:\\n\\n**W1:** This is a very valid criticism, and we appreciate being pushed to be more exact! We have substantially revised several aspects of our presentation to make it more clear how our algorithms fit into our target setting. In particular, we have added a new Problem Setting section that addresses these questions directly and better connects our presentation of background material with our particular setting. As part of this, we also formally state the model of communication we consider and have incorporated this model into the presentation of our algorithms. We hope that these changes are able to address all of the issues you raise.\\n\\n**Q1:** We agree that these details were not explicit enough in our original paper. We have revised our presentation of differential privacy to be more specific to our particular setting. In particular, we now define adjacent datasets and DP explicitly in terms of the local statistics used to compute gain functions in our setting. We have also added an explicit statement in our evaluation section describing the precise adjacency definition we use for Covid data (i.e. two datasets are adjacent at a given time step if they differ only in whether an individual went to the hospital or in the hospital that individual went to).\\n\\n**Q2:** We have rewritten the relevant section of our contributions to make it more clear that the amplified privacy guarantees of RW-AdaBatch refer specifically to its outputs. 
More importantly, we have significantly rewritten and expanded the preamble to the privacy analysis of RW-AdaBatch to draw a more explicit connection between our work and the established literature on privacy amplification by shuffling, which studies the circumstances in which local DP algorithms can satisfy central DP with stronger privacy parameters. Our revisions should hopefully clarify that this is the specific form of privacy amplification we are interested in.\\n\\n**Q3:** We politely disagree with the characterization that RW-AdaBatch assumes a trusted central party. Although our analysis is focused on its amplified central-DP guarantee, the algorithm satisfies local DP with exactly the same parameters as RW-FTPL, and therefore does **not** require any additional trust assumptions.\\n\\nThat said, reviewer qnzh has also commented on the lack of quantitative comparison with prior work in the central model, and so we have performed some additional experiments comparing our algorithms with the FTPL algorithm of Agarwal and Singh 2017. We did consider evaluating against Asi et al. 2023, but their algorithm is heavily tuned for high dimensional data and large time scales in a way that would make fair comparison very challenging. In contrast, Agarwal and Singh represent the current state of the art in the sorts of low-dimensional settings we target, and have the added benefit of naturally satisfying Gaussian DP.\\n\\n**Q4:** We have done our best to incorporate these suggestions as described above.\\n\\n**Q5:** Thank you for letting us know! We have updated the reference accordingly.\\n\\n**Q6:** We are not making any specific assumption here. The gain of learner $i$ is defined to be $\\\\langle x_{t,i}, g_t \\\\rangle$, and so the fact that $\\\\langle x_{t,i}, \\\\tilde{g}_t \\\\rangle$ is an unbiased estimate follows from the fact that $\\\\tilde{g}$ follows a multivariate normal distribution with mean $g_t$.\"}",
"{\"summary\": \"The paper introduces two algorithms, RW-AdaBatch and RW-Meta, for prediction with expert advice under the constraints of local differential privacy (LDP). The primary objective is to enable prediction in the LDP setting. RW-AdaBatch is designed for static environments and enhances privacy by adaptively batching data points, while RW-Meta uses meta-learning to improve predictions in dynamic environments. The paper validates these algorithms through theoretical analysis and empirical testing on COVID-19 hospitalization data, showing significant improvements in prediction accuracy under realistic privacy constraints.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-motivated, addressing a practical and novel problem. It introduces a classical method for solving privacy-preserving problems and proposes variations to address specific challenges.\", \"The writing is clear and well-structured.\", \"Both algorithms achieve near-optimal regret bounds (as claimed) and are supported by detailed privacy analyses.\", \"The experiment improvements seem significant.\"], \"weaknesses\": [\"Can the authors provide specific cases where prediction with expert advice under LDP would be essential?\", \"The computational cost of RW-AdaBatch and RW-Meta appears substantial due to their batched nature and eigenvalue operation, potentially limiting scalability to very large datasets. Additionally, the datasets used are moderate in size. Can the authors provide a complexity analysis for computation and memory?\", \"I am not deeply familiar with prediction with expert advice, so a direct comparison of the regret achieved by these algorithms and previously established ones would be helpful. The bounds claimed to be near-optimal seem to compare with non-private lower bounds that do not involve any privacy parameters ($\\\\varepsilon$, $\\\\delta$, $\\\\mu$). Can the authors comment on that? 
What are some LDP related lower bounds?\"], \"questions\": [\"How to tune the hyperparameter $B$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}"
]
} |
E4NShSRRDP | Contrastive Learning Via Equivariant Representation | [
"Sifan Song",
"Jinfeng Wang",
"Qiaochu Zhao",
"Xiang Li",
"Dufan Wu",
"Angelos Stefanidis",
"Jionglong Su",
"S Kevin Zhou",
"Quanzheng Li"
] | Invariant Contrastive Learning (ICL) methods have achieved impressive performance across various domains. However, the absence of latent space representation for distortion (augmentation)-related information in the latent space makes ICL sub-optimal regarding training efficiency and robustness in downstream tasks. Recent studies suggest that introducing equivariance into Contrastive Learning (CL) can improve overall performance. In this paper, we revisit the roles of augmentation strategies and equivariance in improving CL's efficacy. We propose CLeVER (Contrastive Learning Via Equivariant Representation), a novel equivariant contrastive learning framework compatible with augmentation strategies of arbitrary complexity for various mainstream CL backbone models. Experimental results demonstrate that CLeVER effectively extracts and incorporates equivariant information from practical natural images, thereby improving the training efficiency and robustness of baseline models in downstream tasks and achieving state-of-the-art (SOTA) performance. Moreover, we find that leveraging equivariant information extracted by CLeVER simultaneously enhances rotational invariance and sensitivity across experimental tasks, and helps stabilize the framework when handling complex augmentations, particularly for models with small-scale backbones. | [
"Contrastive Learning",
"Self-Supervised Learning",
"Equivariant Contrastive Learning",
"Invariant Contrastive Learning"
] | Reject | https://openreview.net/pdf?id=E4NShSRRDP | https://openreview.net/forum?id=E4NShSRRDP | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"tg5sWsA4v9",
"oys2M1DtAV",
"nQddUa4fRq",
"hye6N224jd",
"QgzOozEh6o",
"FgArFCGrkV"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"official_review",
"decision",
"meta_review"
],
"note_created": [
1730188709183,
1730366146066,
1730624449340,
1730674794899,
1737523715739,
1734448353206
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5605/Reviewer_VjBc"
],
[
"ICLR.cc/2025/Conference/Submission5605/Reviewer_hk6G"
],
[
"ICLR.cc/2025/Conference/Submission5605/Reviewer_UQk5"
],
[
"ICLR.cc/2025/Conference/Submission5605/Reviewer_dxxT"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission5605/Area_Chair_mp7b"
]
],
"structured_content_str": [
"{\"summary\": \"In this paper, the authors introduce a new regularization term into equivariant contrastive learning to avoid trivial solutions. Empirically, the regularization term improves the performance of contrastive learning across various downstream tasks and different backbones.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow. The main motivation, i.e., the original equivariant loss will lead to trivial solutions, is clear and the solution is straightforward.\\n2. The experiments are comprehensive, and the proposed objective shows benefits in different scenarios.\", \"weaknesses\": \"1. It seems that the main difference between CleVER and DDCL lies in the regularization term. However, is there any evidence to show that DDCL obtains trivial solutions in practice and obtains inferior performance? It would be better to add more discussions about the effectiveness of the regularization term and why it works.\\n2. In downstream tasks, the authors use a combination of equivariant and invariant factors. However, as we have no access to the properties of downstream tasks, how can we decide which kind of factors to rely on?\\n3. It seems that the empirical improvements are a little marginal, especially compared with DDCL. It would be better to add more discussions about the advantages of CleVER.\\n4. I think the ablation study on the balance of three terms in the pretraining objective is necessary. It would be better to show the advantages and disadvantages of invariant and equivariant features.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces an Equivariant contrastive learning method called CLeVER. Inspired by DDCL, CLeVER disentangles representations into Invariant Representations and Equivariant Factors. It performs contrastive learning on the Invariant Representations while ensuring orthogonality of the Equivariant Factors. A regularization loss is utilized to prevent trivial solutions. Experimental results demonstrate that this method achieves excellent performance.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"This paper is well-motivated. It is important to explore equivariance of contrastive learning.\", \"CLeVER proposed in this paper is concise and intuitive.\", \"Extensive experiments demonstrate the effectiveness of CLeVER.\"], \"weaknesses\": \"1. The innovation of CLeVER is minimal. CLeVER without regularization loss is very similar to DDCL. From my perspective, CLeVER without regularization loss simply changes the way $L_{CL}$ and $L_{Orth}$ are computed in DDCL. Moreover, the computation of $L_{CL}$ and $L_{Orth}$ is also very common.\\n2. Section 3.3 primarily introduces the backbone used by CLeVER. I believe this section is irrelevant to the method and should be moved to Section 4 as part of the experimental settings.\\n3. Table 2 conveys too much information, making it difficult to grasp the intended message in Section 4.2.\\n - I suggest breaking down Table 2 into several smaller tables based on the discussions in Section 4.2. Each smaller table should focus on a specific aspect of the research, such as the study of the backbone, a comparison of DDCL (CLeVER) with or without $L_{PReg}$, a comparison between DDCL and CLeVER, a comparison of CLeVER training epochs, and so on.\\n - Additionally, there is some redundant information in Table 2 that is not discussed in the main text, such as the results of SimCLR, Debiased, BYOL, MoCo, MoCo V2, and RefosNet.\\n4. 
Comparing only CLeVER and DDCL in this paper is insufficient. I believe it would be better to compare CLeVER with more ECL methods, such as [1], [2], [3].\\n \\n [1] Dangovski, Rumen, et al. \\\"Equivariant contrastive learning.\\\"\\u00a0*arXiv preprint arXiv:2111.00899*\\u00a0(2021).\\n \\n [2] Xie, Yuyang, et al. \\\"What should be equivariant in self-supervised learning.\\\"\\u00a0*Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2022.\\n \\n [3] Gupta, Sharut, et al. \\\"Structuring representation geometry with rotationally equivariant contrastive learning.\\\"\\u00a0*arXiv preprint arXiv:2306.13924*\\u00a0(2023).\", \"questions\": \"1. Section 3.2 mentions \\\"we consider a function f: X \\u2192 Y to be T-equivariance when it satisfies Eq. 6. We call it T-equivariance when f satisfies Eq. 7.\\\" I believe that satisfying Eq. 6 does not constitute T-equivariance. Is there a typo present here?\\n2. Table 3 and Table 4 respectively analyze the effect of equivariance on the robustness of SimSiam and DINO. However, why is DDCL used to analyze SimSiam while CLeVER is used to analyze DINO? I believe it would be more reasonable to analyze SimSiam and DINO using the same method.\\n3. In Table 3 and Table 4, some results are shaded in gray. What is the purpose of this shading? Additionally, the best results in the shaded columns are not bolded.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This study presents a novel equivariant contrastive learning framework, termed CLeVER, which is compatible with augmentation strategies of varying complexities across various mainstream contrastive learning backbone models. Specifically, CLeVER enhances the DDCL method by mitigating the instability in the training process that can result in trivial solutions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper is motivated by the important problems with existing equivariant contrastive learning and introduces a regularization term designed to mitigate collapse problem in DDCL.\", \"weaknesses\": \"The novelty of the proposed method is limited, as the primary difference from DDCL lies solely in the incorporation of a regularization term, which is applicable only to DDCL. Considering this aspect, the contribution of the study appears significantly constrained.\\n\\nFurthermore, there has been no analysis of the collapse phenomenon that occurs within the DDCL framework. The study does not provide a thorough examination of why the proposed regularization term is the most effective solution for addressing the collapse issue in DDCL.\\n\\nThe prevention of representation collapse in contrastive learning has been a longstanding area of investigation. However, there is currently a lack of comprehensive reviews summarizing these studies. It is essential to explicitly delineate how CLeVER distinguishes itself from existing research. 
Therefore, a detailed section on related works should be included to address these aspects.\\n\\nAdditionally, while the baseline framework employed is DINO, the effectiveness of the method has not been validated against alternative frameworks such as SimCLR or Barlow Twins.\\n\\nMoreover, there is a lack of benchmark comparisons with other equivariant contrastive learning methods.\", \"questions\": \"How does addressing the collapse problem in DDCL contribute to the improvement of equivariance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes CLeVER, an ECL framework which adds regularization to the decoupled contrastive learning representations that is previously proposed, encouraging similarity on the invariant part of positive pairs and orthogonality on their equivariant part, and regularizing on the distance between the normed logits of the two parts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed regularization does improve DINO across multiple backbones.\\n\\n2. The analytical results and visualizations are helpful. The experiment section is thoughtful and covers various aspects for better evaluation.\\n\\n3. This paper is easy to follow.\", \"weaknesses\": \"1. This paper proposes a novel regularization based on intuition, but does not discuss any theoretical insights. Considering Table 1, when minimizing $|\\\\|\\\\|h_V\\\\|\\\\|-\\\\|\\\\|h_I\\\\|\\\\||$, why $h_V$ is properly regularized, with values increasing toward that of $h_I$, but $h_I$'s do not decrease much, as moving toward $h_V$ also helps minimize the regularization loss? Moreover, the authors do not report such table on CLeVER for us to compare with Table 1 to check the improvements.\\n\\n2. Based on Table 2 results, despite the continual improvements on DINO, the authors do not study the effectiveness and the ability to generalize of CLeVER on other contrastive methods besides DINO and DDCL. I am skeptical about whether it can work well with other similar methods such as Barlow Twins, SimSiam. Based on Sec 3, does CLeVER rely heavily on DDCL? I am afraid that the contribution of the proposed regularization is limited to improving DDCL and a few similar work. Moreover, 0.7 improvement on DDCL for one setting is not convincing, it would be more comprehensive if we could see how CLevER improves DDCL across various backbones and scales like for DINO. 
After all, DINO has been proposed for more than three years, and I also encourage the authors to consider more recent methods. \\n\\n3. Thanks for the attention visualizations, but I am curious about any quantitative results and analysis. From the visualizations, I notice that CLeVER tends to have larger values and wider coverage on the object than DINO, which is great, but it also spills some attention to unrelated pixels, which is more obvious in supplementary visualizations for independent attention heads.\", \"questions\": \"Despite the demonstrated effectiveness of CLeVER regularization, I am majorly concerned about its application to general contrastive learning methods. If it cannot be adopted by general methods and is dependent on certain methods and objectives, I am afraid that its contribution to the CL literature is limited. Moreover, the authors do not explain in depth the motivation for such regularization as I still have concerns. Please see the weakness section, and in short, I am majorly concerned about (2), then (1), and the authors may resolve (3) if time permits.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"metareview\": \"This paper proposes an equivariant-based contrastive learning framework to address limitations in invariant contrastive learning (ICL). The authors claim that the absence of augmentation-related information in ICL leads to inefficiencies and reduced robustness. CLeVER introduces a regularization term designed to decouple invariant and equivariant representations while maintaining orthogonality and minimizing the distance between their normed logits. Experiments demonstrate performance gains on baseline models such as DINO across downstream tasks and various backbones.\\n\\nReviewers point out that CLeVER is closely related to DDCL, with the primary difference being the addition of a regularization term. The theoretical motivation behind this term is not fully explained, leaving the contribution incremental. The authors are encouraged to add more explanations and thorough studies and comparisons to better elaborate the contributions.\", \"additional_comments_on_reviewer_discussion\": [\"Several common points were raised among the reviewers:\", \"Novelty and Contribution: Reviewers dxxT, UQk5, and hk6G highlighted that CLeVER's main innovation\\u2014introducing a regularization term\\u2014is incremental and primarily tied to DDCL. The authors did not provide sufficient explanations or evidence on why this regularization is theoretically motivated or effective.\", \"Generalizability: Reviewers questioned whether CLeVER can work with other methods like SimCLR or SimSiam. The authors did not present additional results or clarifications during the rebuttal phase.\", \"Experimental Scope: While CLeVER showed consistent improvements over DDCL, the margins were modest. 
Reviewers requested more benchmarks against other equivariant contrastive learning methods (e.g., works by Dangovski et al., Xie et al., Gupta et al.), which were missing.\", \"Attention Visualizations: Reviewer dxxT appreciated the visualizations but raised concerns about quantitative results and unintended \\\"spilling\\\" of attention to unrelated areas.\", \"The authors did not provide a rebuttal to address these concerns and no further discussion was raised. Thus, I assume that these concerns do remain.\"]}"
]
} |
E4LAVLXAHW | Black-Box Detection of Language Model Watermarks | [
"Thibaud Gloaguen",
"Nikola Jovanović",
"Robin Staab",
"Martin Vechev"
] | Watermarking has emerged as a promising way to detect LLM-generated text, by augmenting LLM generations with later detectable signals. Recent work has proposed multiple families of watermarking schemes, several of which focus on preserving the LLM distribution. This distribution-preservation property is motivated by the fact that it is a tractable proxy for retaining LLM capabilities, as well as the inherently implied undetectability of the watermark by downstream users. Yet, despite much discourse around undetectability, no prior work has investigated the practical detectability of any of the current watermarking schemes in a realistic black-box setting. In this work we tackle this for the first time, developing rigorous statistical tests to detect the presence, and estimate parameters, of all three popular watermarking scheme families, using only a limited number of black-box queries. We experimentally confirm the effectiveness of our methods on a range of schemes and a diverse set of open-source models. Further, we validate the feasibility of our tests on real-world APIs. Our findings indicate that current watermarking schemes are more detectable than previously believed. | [
"llm",
"watermarking"
] | Accept (Poster) | https://openreview.net/pdf?id=E4LAVLXAHW | https://openreview.net/forum?id=E4LAVLXAHW | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zPHm3NSEiN",
"xToBTvTtJE",
"wRtYRkQSZh",
"vooGb1wZqj",
"v8xiiCpidX",
"sOgSA126Ax",
"qryem5jrGn",
"pCNOufkDDt",
"iPD812JhCw",
"i0eWrgsyuv",
"hoRTqyRh3j",
"hjy90VpLHE",
"h6OsoiWglX",
"cvXIKuHI9i",
"brnqzy9Q6C",
"ZrtgnkjT4p",
"QgcDMYdr9h",
"KtuOigIuO7",
"DYNwoE2DrQ",
"CE76YId6Vf",
"34zqMYGwTo",
"0A0XGc9t09"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment"
],
"note_created": [
1732228016121,
1732555595730,
1732227862840,
1729406948738,
1737524291567,
1732354012437,
1732561230827,
1732227697684,
1730710650809,
1732227430262,
1732227894459,
1732363166478,
1732228047276,
1732227752073,
1730323701686,
1732227451153,
1732555657801,
1730637150110,
1732227918383,
1734582662493,
1732228248646,
1732527162946
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Reviewer_xkZg"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13958/Reviewer_xkZg"
],
[
"ICLR.cc/2025/Conference/Submission13958/Reviewer_PSUp"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Reviewer_gicP"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Reviewer_PSUp"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Reviewer_Ay59"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Area_Chair_bWoS"
],
[
"ICLR.cc/2025/Conference/Submission13958/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13958/Reviewer_Ay59"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer xkZg\", \"comment\": \"We thank the reviewer for their detailed feedback and are genuinely glad to hear that they greatly appreciate the writing of the paper. Below, we address the concerns raised in questions Q1 to Q5; and note that a revised version of the paper has been uploaded, with updates highlighted in blue. We are happy to clarify further if there are additional questions.\\n\\n**Q1: How much does the focus on three watermark families limit the scope of the proposed tests?**\\\\\\nGood question\\u2014our results suggest that these three families encompass a broad range of watermarks, including some released after the first version of our work (see below).\\n\\nFirst, we note that each test follows the same high-level idea: a specific behavior of a broad watermarking family is exploited to prompt the LLM with two different distributions. While this naturally raises the question of the extension to new paradigms, we make the point that targeting these fundamental properties enables us to detect a large set of currently popular and practical schemes, which either directly fall into one of these families or combine several of these properties.\\n\\nTo substantiate this argument further, in our new revision, we provide three new experiments:\\n- In Appendix A, we analyze the first public large-scale deployment of an LLM watermark, Google Deepmind\\u2019s SynthID-Text, which was open-sourced after our submission. We discuss the differences compared to previous schemes and show that our Red-Green test works perfectly on this scheme, further demonstrating its applicability. \\n- In Appendix A, we consider a variant of DiPMark/$\\\\gamma$R and $\\\\delta$R schemes without the cache, as suggested by Rev. PSUp. 
Interestingly, these variants now fall into the Red-Green family, and as our new results demonstrate, are detectable by our Red-Green test.\\n- In Appendix F, we consider a variant of the Aaronson watermark [5] (Figure 6), which belongs to both the Red-Green and the Fixed-Sampling families, and show that the Fixed-Sampling test can detect it for small values of its key parameter $\\\\lambda$, including $\\\\lambda=0$ which corresponds to the original Aaronson scheme.\\n\\nAs we discuss in Sec. 7, while completely novel approaches can in principle always exist, our results strongly suggest that our tests are broadly applicable to most currently practical schemes. \\n\\n**Q2: Did the authors explore the possibility of a unified test for all watermarks?**\\\\\\nWhile this could be interesting, we see value specifically in having the tests be separate. In particular, we argue that specificity has the benefit of allowing for more power and more precise results, as we can directly know to which family the tested scheme belongs. The latter can help enable downstream attacks, e.g., the attacks in [1,2] are only applicable to Red-Green schemes. Similarly, knowing the family allows for parameter estimation, which is a necessary step to mount such attacks.\\n\\nWe believe it may be possible to unify the different tests within a single prompt. However, given that the total cost of running our tests is roughly \\\\\\\\$3, we don\\u2019t see the practical benefits of a single unified test for the three tested families. Moreover, a joint test could be more complex, harder to understand and may not be necessarily cheaper. Finally, in case of fundamentally new scheme families, even a joint test that we hypothetically devise now would still need to be updated/revised, as it would not be directly applicable. 
\\n\\nWe welcome follow-up work that improves the fundamental aspects of detection (power, detection of newer schemes), and believe that our tests can serve as a solid baseline and further provide insight into the key drawbacks of each of the fundamental ideas used in the literature from the perspective of detectability. \\n\\n**Q3: Are the proposed tests robust to variations in the scheme hyperparameters?**\\\\\\nWe have experimentally validated our tests across a wide range of scheme hyperparameters. More specifically, we had shown in Table 3 the results of the tests with the following hyperparameters: \\n- Unwatermarked models with different temperatures.\\n- Red-Green watermarks with different hash functions, $\\\\delta$ and $\\\\gamma$.\\n- Fixed-Sampling watermarks with different length of key.\\n- Cache-Augmented watermarks with different underlying schemes.\\nWe also tested all these parameter combinations across 7 different models. \\n\\nAs the results are consistent across the tested hyperparameters, we believe that these variations in hyperparameters are sufficient to experimentally demonstrate the robustness of our tests. We are happy to explore more variations of parameters if the reviewer has some concrete examples.\"}",
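As background for the Red-Green hyperparameters the response refers to (the context hash, the bias $\delta$, and the green fraction $\gamma$), a minimal sketch of one watermarked decoding step may help. This is an illustrative reimplementation of the standard context-hash construction, not the paper's code; the hash and PRNG choices here are assumptions:

```python
import hashlib
import random

def green_set(context, vocab_size, gamma=0.25, key=42):
    """Seed a PRNG with a hash of the secret key and the context window,
    then mark a gamma-fraction of the vocabulary as 'green'."""
    seed = int(hashlib.sha256(f"{key}:{context}".encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def watermark_logits(logits, context, delta=2.0, gamma=0.25, key=42):
    """One Red-Green decoding step: add the bias delta to every green
    token's logit; red tokens are left unchanged."""
    green = green_set(context, len(logits), gamma, key)
    return [l + delta if i in green else l for i, l in enumerate(logits)]
```

With the key, detection reduces to counting how often generated tokens land in the green set; the black-box tests discussed here instead compare outputs across prompts engineered to elicit two different distributions.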
"{\"comment\": \"We thank the reviewer for raising their score to recommend acceptance, and appreciate the comments. We are discussing ideas to tune the writing to make it more cohesive, e.g., by adding a short section before our tests are introduced, which would give more context around detectability and the threat model. If the reviewer has other concrete suggestions, we are happy to take those into account.\"}",
"{\"title\": \"Response to Reviewer PSUp\", \"comment\": \"We thank the reviewer for their detailed and exhaustive feedback. Below, we address the concerns raised in questions Q1 to Q9; and note that a revised version of the paper has been uploaded, with updates highlighted in blue. We are happy to clarify further if there are additional questions.\\n\\n**Q1: Could the authors provide more mathematical descriptions of the algorithms for the detection test?**\\\\\\nWe believe that the more descriptive introduction of each test (Sections 2, 3, and 4) better conveys the intuition behind each step and makes the material more approachable. Nonetheless, following the reviewer\\u2019s suggestions, we have added (Appendix G) algorithmic presentations of the Red-Green test, the Fixed-Sampling test and the Cache-Augmented test. The algorithm for the Red-Green test in Appendix G also addresses the lack of clarity regarding the permutation test used for computing the p-values, which was flagged by the reviewer. We are happy to improve the writing further if the reviewer has additional suggestions.\\n\\n**Q2: Is bypassing the Fixed-Sampling test trivial? More broadly, is there an approach to make schemes undetectable by any test?** \\\\\\nWe respectfully disagree with the statement that bypassing the Fixed-Sampling test is trivial. First, as we show in Appendix E, our test works even under various adversarial modifications. Second, as we state in Section 7 (Limitations), it is possible that future schemes break our tests, and we do not believe this reduces the value of our contributions. 
Lastly, adversarial modifications need to be included in the broader picture of LLM watermarking: do such modifications have adversarial effects on strength, robustness to removal, or behavior in low-entropy settings?\\n\\nHowever, we agree with the reviewer that the insight from [1], conditioning the watermark on the entropy of the context, may be possible to leverage to make a given scheme less detectable. Running the adversarial modification suggested by the reviewer, and setting $\\\\lambda$ as an entropy bound on the first previous tokens, the Fixed-Sampling test can detect this modified scheme up to $\\\\lambda = 2$ on the Llama model. Yet, by simply rewriting the prompt to first let the model generate independent high entropy text, even with $\\\\lambda=10$ we obtain a p-value of $2.9 \\\\times 10^{-40}$. The test with the updated prompt does not reject the null hypothesis when the watermark is not present ($p=0.94$). This shows that the intuition behind our Fixed-Sampling test is both relevant and practical. The updated prompt is: *Write a short essay about war; but first prepend 10 random pinyin characters. Here is the format of your response: {chinese characters} {essay}. Start the essay with ##Start. Don't repeat the instructions.*.\\n\\nTo explore the question of such modifications even further, in a newly added Appendix F, we present a new experiment analyzing a stronger extension of the scheme proposed by the reviewer. We show that modifying the scheme by conditioning the watermark on the entropy of the context bypasses our tests but also reduces the watermark strength. \\n\\nNamely, we used the Aaronson watermarking scheme [2] which is part of the Fixed-Sampling family, and applied the watermark on tokens where the $h$ previous tokens entropy is greater than $\\\\lambda$ (the scheme is detailed in Algorithm 2). 
This means not only that the first few tokens are generated without a watermark (as in the reviewer suggested adversarial modification), but also any tokens that do not satisfy the entropy criteria. Compared to the additional reviewer suggestion, the second point prevents clever prompt engineering from bypassing the entropy mechanism to detect the watermark.\\n\\nWe find that increasing $\\\\lambda$ decreases the original watermark strength/robustness, but also decreases our test ability to detect the watermark at an even faster rate (Figure 6). For reference, our test succeeds up to $\\\\lambda = 0.1$. This suggests a trade-off between watermark undetectability and watermark strength/robustness. It intuitively makes sense as, in the limit, using the scheme from [1] guarantees undetectability. We note, however, that [1] itself suffers from severe practical limitations [3] and lacks robustness evaluation. Hence, any of our tests being bypassable by some schemes is not surprising, and we don\\u2019t claim to be able to detect any possible watermarking scheme. But we remark that such modifications come at the cost of the watermark strength/robustness. Our work allows model providers or legal authorities to have realistic expectations regarding the effectiveness and pitfalls of a given watermarking scheme; and enables them to choose a scheme appropriate for their need.\"}",
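The permutation test mentioned in Q1 for computing p-values can be sketched generically. This is a plain two-sample permutation test on a difference-of-means statistic, assumed here purely for illustration; the paper's actual test statistic may differ:

```python
import random

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sample permutation test: p-value for the null hypothesis that
    x and y are drawn from the same distribution, using the absolute
    difference of sample means as the test statistic."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled samples
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            hits += 1
    # add-one smoothing keeps the p-value strictly positive
    return (hits + 1) / (n_perm + 1)
```

A small p-value indicates the two query distributions behave differently, which is the signal the detection tests look for.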
"{\"summary\": \"This paper presents a significant contribution to the field of LLM watermarking. From the authors' claims, they are the first to provide a comprehensive study of black-box watermark detection. Their findings demonstrate the practical detectability of prominent watermarking schemes, challenging previous assumptions about their undetectability. This paper has provided the foundation for future research on more robust watermarking techniques and advanced detection methods.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper is extremely well-written. Kudos to the authors for taking time to ensure that the paper is concise, clear, and enjoyable enough for anyone to read. The formulations for each statistical test for detectability are clear and well explained. Providing detailed tests for each class of watermarks further strengthened the paper. The results highlight the strength of their approach as watermarks can be detected accurately, more so at a low cost. I also appreciate the fact that they experimented to see if their tests could cross detect other watermarks.\", \"weaknesses\": [\"The methods, while detailed, appear to focus on a strict reverse engineering approach for detecting each specific class of watermark. Did the authors explore the possibility of a unified approach that could detect all classes of watermarks? What are the authors' thoughts on this?\", \"The experiments were limited to just three classes of watermarks. I believe this is okay, and future work could expand the scope to include other types, but it is a weakness for this paper.\", \"The cross-detection tests only applied to watermarks from different classes. However, there were no evaluations on whether the detection is robust to variations in the hyperparameters of the same watermark. 
Can the detection identify a watermark regardless of the hyperparameters used?\", \"Additionally, the paper lacks details on the efficiency of the detection tests. For instance, how many tokens are required to reliably detect the presence of watermarks using these methods? Addressing this could further minimize costs.\"], \"questions\": \"My questions are outlined in the weaknesses mentioned earlier. Please address those and the following:\\n\\n- In transitioning from an attack-focused approach to a defensive one, do the authors believe that their tests would still be effective in detecting the presence of watermarks in texts that have been adversarially manipulated to remove them, especially in a blackbox scenario?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The authors already states that the pros of their study outweigh the cons, and I am inclined to side with them.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"comment\": \"Thank you for taking the time to address my queries, run additional experiments, and update the paper.\\n\\nRegarding Q5, my question was based on a hypothetical scenario drawn from your ethics statement: Suppose an attacker does not have access to the provider's LLM or detector but is aware that a watermark is being used. If the attacker paraphrases the text (and let's assume the paraphrased text can bypass the provider's detector), would your detector still be able to identify the watermark? My reasoning for this question is that, based on the proposed method, it intuitively seems that your approach might be more robust to such corruptions compared to the provider's detectors. However, I could be wrong, which is why I wanted to understand your perspective on this scenario (hence not putting it as a weakness).\\n\\nAs for your other responses, I am thoroughly convinced of the potential of your method. Excellent work, and I will be increasing my scores accordingly. Kudos!\"}",
"{\"comment\": \"I would like to thank the authors for their detailed rebuttal. Most of my concerns have been addressed, and the paper has greatly improved after the revision. I will raise my score.\"}",
"{\"title\": \"Response to Reviewer Ay59\", \"comment\": \"We thank the reviewer for their detailed feedback. Below, we address the concerns raised in questions Q1 to Q4; and note that a revised version of the paper has been uploaded, with updates highlighted in blue. We are happy to clarify further if there are additional questions.\\n\\n**Q1: Can you highlight the practical implications of watermark detection?**\\\\\\nCertainly. The objective behind watermarking an LLM is to enable the detection of whether a given text was generated by a specific LLM. In practice, it should allow both holding a model provider accountable for harmful text generated by its model and holding users accountable for using an LLM in scenarios where its use is inappropriate or forbidden. Being able to detect a watermark behind an LLM deployment provides a malicious user with multiple opportunities. \\n\\nFirst, detection is a common prerequisite for performing spoofing attacks [1, 2, 3, 4], where a malicious user learns the watermark in order to generate arbitrary watermarked text without using the watermarked model. Such attacks can be used to discredit a model provider by generating text that appears to be genuinely watermarked and attributing it to the model provider.\\n\\nSecond, detection is a prerequisite for assisted scrubbing attacks (as in [1, 4]), where a malicious user can more successfully remove the watermark from an LLM generated text compared to blindly rewriting the watermarked texts. 
Consequently, such malicious users can nullify any positive effects associated with the watermark deployment.\\n\\nLastly, knowing that a particular LLM is watermarked may lead a malicious user to avoid using that LLM entirely and instead favor another LLM that is not known to be watermarked.\\n\\nHence, knowing how detectable schemes are in practice, besides theoretical interest, is also important for model providers or legal authorities to have realistic expectations regarding the effectiveness and pitfalls of a given watermarking scheme. We have added a discussion about the practical implications of watermark detection in the updated version of the paper in a newly added Appendix I referenced from our Introduction.\\n\\n**Q2: Does the need for one test per scheme family limit the applicability of the proposed tests?**\\\\\\nGood question\\u2014we do not believe this is the case.\\n\\nFirst, we note that each test follows the same high-level idea: a specific behavior of a broad watermarking family is exploited to prompt the LLM with two different distributions. If the distributions are highly dissimilar, it suggests that a watermark is present. Otherwise, the model is likely not watermarked. This idea is instantiated to three common paradigms: the key based on the context (Red-Green schemes), the key permutation (Fixed-Sampling) and the presence or absence of a cache (Cache-Augmented). While this naturally raises the question of the ease of extension to new paradigms, we make the point that targeting these fundamental properties enables us to detect a large set of currently popular and practical schemes, which either directly fall into one of these families or combine several of these properties.\\n\\nTo substantiate this argument further, in our new revision, we provide three new experiments:\\n- In Appendix A, we consider a variant of DiPMark/$\\\\gamma$R and $\\\\delta$R schemes without the cache, as suggested by Rev. PSUp. 
Interestingly, these variants now fall into the Red-Green family, and as our new results demonstrate, are detectable by our Red-Green test.\\n- In Appendix A, we analyze the first public large-scale deployment of an LLM watermark, Google Deepmind\\u2019s SynthID-Text, which was open-sourced after our submission. We discuss the differences compared to previous schemes and show that our Red-Green test works perfectly on this scheme, further demonstrating its applicability. \\n- In Appendix F, we consider a variant of the Aaronson watermark [5] (Figure 6), which belongs to both the Red-Green and the Fixed-Sampling families, and show that the Fixed-Sampling test can detect it for small values of its key parameter $\\\\lambda$, including $\\\\lambda=0$ which corresponds to the original Aaronson scheme.\\n\\nAs we discuss in Sec. 7, while completely novel approaches can in principle always exist, our results strongly suggest that our tests are broadly applicable to most currently practical schemes.\"}",
"{\"summary\": \"The paper shows that it is possible to detect the presence of most existing watermarks using black-box interaction with the model, without knowing the watermarking key.\\nThey also demonstrate that their attack is capable of estimating the parameters used in the watermarking schemes.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"A huge number of watermarking papers have come out recently.\\nMany of them ask whether their watermarks harm generation quality by performing experimental evaluations, but these are inherently limited: There is no way to experimentally guarantee that the watermark will preserve the quality under *every possible* use-case of the model.\\nTherefore, perhaps a more useful test of quality is to simply attempt to detect it. If attacks that are specifically designed to detect the watermark still fail to do so, then this can be seen as unusually strong evidence that it is quality-preserving.\\n\\nThis work shows that existing schemes typically fall short in this respect, demonstrating an important weakness.\", \"weaknesses\": \"It is not surprising that they were able to easily detect the schemes they attacked. Those schemes are not designed to be undetectable.\\nIn the \\\"Limitations\\\" section, they justify the choice to only consider these schemes with the claim that the provably-undetectable schemes \\\"lack experimental validation\\\" and \\\"are not yet practical due to slow generation speed.\\\"\\n\\nHowever, I believe these claims require justification because:\\n- \\\"Excuse me, sir? Your language model is leaking (information)\\\" is a practical implementation of an undetectable scheme. The author doesn't report any issues. This seems to already contradict the above claims.\\n- As I understand it, the generation speed of these techniques (including the one just mentioned) is _no slower_ than it is for any other scheme. 
They work essentially identically to other schemes, except that they are careful not to embed bias in cases where it might be noticeable without the key.\\n- I think that the reason there are relatively few practical demonstrations of undetectable schemes is just that most people doing experiments don't care about it. If you can get slightly better robustness by dropping undetectability, most experimentalists will go for that. However, since the message of the present paper depends on it _actually being difficult_ to build a practical undetectable scheme, it would be much more compelling if you at least attempt to do so.\", \"here_is_a_simple_undetectable_scheme_that_you_could_try_as_a_benchmark\": \"Use Aaronson's scheme exactly (implemented in many places, e.g. Piet et al.), except that if a $k$-gram has empirical entropy (as defined in Christ et al.) less than $\\\\lambda$, then don't use the Gumbel-max trick and instead just sample without bias according to the model. (Crucially, the first $k$ tokens in any response should be sampled exactly according to the model, without any watermark bias.) Note that this scheme is no slower than any other scheme. 
Detection with the key is also extremely fast.\\n\\nIt is easy to see that this scheme will require seeing roughly $2^{\\\\lambda/2}$ tokens before it becomes detectable _without_ the key; and it should be detectable _with_ the key as long as the text has (empirical) entropy at least $\\\\lambda$ in most sequences of $k$ consecutive tokens.\\n- If you find that this scheme only becomes practically undetectable once you set $k$ or $\\\\lambda$ to be unreasonably large (such that detection with the key significantly suffers), then I would find the message about existing practical schemes much more compelling.\\n- If you find that this scheme is in fact practically undetectable for reasonable choices of $k$ and $\\\\lambda$, then that would arguably be an even more compelling result (although the message would change slightly).\", \"questions\": \"In Appendix C, you discuss a method for estimating scheme parameters. Are your techniques capable of learning the watermarking key itself?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
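The reviewer's proposed benchmark scheme can be sketched as follows. This is an illustrative toy version: the PRF is a seeded `random.Random` rather than a cryptographic one, and the entropy gate uses the Shannon entropy of the current next-token distribution as a stand-in for the empirical entropy of Christ et al.:

```python
import hashlib
import math
import random

def prf(key, context, vocab_size):
    """Toy PRF: pseudorandom uniforms in [0, 1), seeded by the secret key
    and the current k-gram context (illustrative, not cryptographic)."""
    seed = int(hashlib.sha256(f"{key}:{context}".encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return [rng.random() for _ in range(vocab_size)]

def entropy_bits(probs):
    """Shannon entropy of the next-token distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def sample_token(probs, context, key=0, lam=1.0, rng=random):
    """Entropy-gated Aaronson sampling: below the entropy bound lam, the
    model distribution is sampled unbiased (no watermark); otherwise the
    Gumbel-max trick picks argmax_i r_i ** (1 / p_i)."""
    if entropy_bits(probs) < lam:
        return rng.choices(range(len(probs)), weights=probs)[0]
    r = prf(key, context, len(probs))
    return max(range(len(probs)),
               key=lambda i: r[i] ** (1.0 / probs[i]) if probs[i] > 0 else 0.0)
```

The gate makes low-entropy steps (including the first tokens of a response) watermark-free, which is exactly what the reviewer argues should frustrate black-box detection.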
"{\"title\": \"Response to Reviewer gicP\", \"comment\": \"We thank the reviewer for their detailed feedback. Below, we address the concerns raised in questions Q1 to Q3; and note that a revised version of the paper has been uploaded, with updates highlighted in blue. We are happy to clarify further if there are additional questions.\\n\\n**Q1: Can you justify the claim that the scheme from [1] lacks practical validation? Does the same apply to [3]?**\\\\\\nOur claim regarding the inefficiency of [1] originates from Figure 3 in [2] (this citation was unfortunately missing before, and we added it to our Sec. 7). In particular, they demonstrate that using [1] reduces the generation speed of the model approximately 10x compared to the case without the watermark, which is prohibitive to any practical deployment. Their results suggest that the slow generation is caused by the conversion from tokens to a binary vocabulary. It is our understanding that [3] also employs the same conversion, and thus likely experiences the same issues. We did not find a corresponding latency evaluation of [3] that would contradict this. Further, both [1] and [3], amongst other properties also lack any evaluation of the watermark robustness, a property central to most LLM watermarking works. Notably, the authors of [3] acknowledge that their approach is primarily theoretical and that the given practical instantiation only serves as a proof of concept.\\n\\nMore broadly, we greatly appreciate the attempts to construct theoretically-grounded schemes such as [1] and [3], and believe they bring us closer to understanding the difficulty of building a practical undetectable scheme. We look forward to future work on validation of such schemes, but highlight that in the current state it is hard to know their practical value, e.g., if the robustness properties are only slightly or fundamentally different from other schemes. 
Thus, our goal was to test our method on as many practically demonstrated schemes as possible. This is further extended by our new experiments in App. A (variants of Cache-Augmented schemes, SynthID-Text) and App. F (the reviewer\\u2019s proposed Aaronson variant, discussed below in Q2). We have updated the limitations section to better reflect our position, and are happy to make further changes if the reviewer has concrete suggestions. \\n\\n**Q2: Can you investigate the strength-detectability tradeoff of a variant of the AAR scheme that gets partially disabled based on entropy, inspired by the ideas of [1]?**\\\\\\nWe thank the reviewer for this idea. While we already had included experiments to test the robustness of our test in adversarial settings (Appendix E), coming up with new adversarial schemes based on the idea from [1] indeed strengthens the discussion regarding the state of watermark detectability. In a new experiment, we implement and evaluate the proposed variant, presenting detailed results and a discussion in Appendix F.\\n\\nIn summary, this change remains detectable by our method until $\\\\lambda/k = 0.1$. We also show that the strength of the watermark decreases with $\\\\lambda$. Hence, there is a trade-off between undetectability and watermark strength. While this partial evaluation seems promising, including other important properties of the watermark (e.g., FPR at low TPR, robustness to different entropy settings, dependence on text length, robustness to watermark removal) may reduce the viable range of parameters further, as is generally the case for LLM watermark evaluations. On the other hand, more targeted detection methods may be more effective against this variant. \\n\\nWe included a summary of the consequences of this new finding in our Sec. 7 as a pointer for future work on finding practical undetectable schemes, and we are happy to adapt the message there further. 
\\n\\n\\n**Q3: Can your parameter estimation techniques be used to learn the private key of the watermark?**\\\\\\nNo\\u2014this is a much harder problem that was explored in prior work. While learning the exact key is effectively not possible for the schemes we consider, learning the full effect of the key has been shown to be possible in some instances. For example, for the Unigram scheme proposed in [4] (a Red-Green scheme with a fixed Red and Green vocabulary independent of the context), [5] proposes a method to almost perfectly recover the Red/Green partition. \\n\\nMore generally, the field of watermark spoofing studies how to generate watermarked text without knowledge of the private key. Such spoofing attacks [6, 7, 8] only need to acquire partial knowledge of the effect of the watermark (for instance, partial knowledge of the Red/Green partition given a context) to be successful. Hence, most attacks on watermarking schemes do not rely on having full knowledge of the private key. However, as they often require some knowledge of the scheme parameters (e.g., context size), they could benefit from our parameter estimation as the first step in a more elaborate exploit of a scheme [6, 8]; chaining of attacks is an interesting future work item.\"}",
"{\"comment\": \"**Q3: Are the explored cache implementations impractical?**\\\\\\nWe first highlight that in [4] the authors do not provide a concrete instantiation of the cache mechanism. Hence, we believe we are first to discuss concretely how a model provider could deploy the cache in practice. We consider two options that we found natural: per-user and global cache.\\n\\nWe strongly disagree with the claim that a per-user cache is not practical. While we agree that model providers have a large number of users (OpenAI claims to have, on average, 200 million users per week), the cost of storing one hash table per user appears negligible compared to the cost of actually running the model (or storing the discussions history, as in the case of ChatGPT). Hence, we do not see any obvious reason why a per-user cache should not be applicable.\\n\\nRegarding the global cache, we argue that for popular models, waiting for the cache to be cleared is not a practical issue. For instance, assuming 200 million users per week and 1,000 tokens generated per user per day, this suggests that roughly 30 billion tokens are generated per day. Because with a cache there is a trade-off between cache size and watermark strength/robustness, we believe a practical instantiation of a cache would be comparatively small (as also hinted at in [4]). Hence, we argue that the cache would have to be cleared frequently enough to allow for feasible detection. \\n\\nWe note that as long as no practical cache instantiation is deployed and disclosed by model providers, it is hard to make any certain statements about the real-world deployment of caches. However, given the above we do believe that our current assumptions are not inherently impractical and actually provide greater detail than some prior work in this area. \\n\\nWe agree that discussing the effects of practical instantiations of the cache is important and can guide model providers. 
We now included in the updated paper that we are the first work to open the discussion regarding how to instantiate the cache mechanism. If the reviewer has other ideas about cache instantiations, we are happy to consider those.\\n\\n**Q4: Is the cache only a minor component of the schemes? Can the schemes from [4] and [5] be detected without the cache?**\\\\\\nWe agree with the reviewer that the cache can be seen as a modular component added on top of an already existing scheme, as we hinted on in Section 4. \\n\\nAs per the reviewer's suggestion, we added in Appendix A (Table 6) the results of the Red-Green test on both $\\\\delta$-reweight and DiPmark ([4] and [5]) without cache. Both schemes can be detected with the Red-Green test.\\n\\nMore generally, we argue that (to our knowledge) all prior works proposing watermarking schemes with cache could also be instantiated without cache. Indeed, the cache is added to guarantee $b$-shot undetectability (as defined in [4]) and is not in itself a watermark. Yet, because of the cache mechanism, either the Red-Green test or the Fixed-Sampling test could fail to detect a watermark (despite the watermarking scheme behind the cache belonging to one of these families). Hence, the Cache-Augmented test ensures that those schemes can, in fact, be detected.\\n\\n**Q5: Could the authors provide FPR for their results instead of median p-values?**\\\\\\nWe chose median p-values for our main results (Table 1), as it is a common metric provided in the field ([6] and [7]) and provides more robust insights into the distribution of p-values.\\n\\nYet, we do agree with the reviewer that providing FPR at different rejection rates is also important to gain a better understanding of the test's effectiveness. This is why we had already provided, in Table 3 and Table 4, the rejection rates at 1% and 5% for our main experiments (in particular, covering all experiments from Table 1). 
Those two tables show both the TPR (in the columns where the watermarking scheme agrees with the test) and the FPR (in the other columns).\\n\\n**Q6: Could you provide more detail regarding the estimation of the context size and provide additional results?**\\\\\\nCertainly. We added in Appendix C1 a detailed description of the context size estimation algorithm. Moreover, we show in Figure 4 the distribution of the different logits estimation for different prompts and with an increasing context size. \\n\\nWe are happy to improve the clarity of the estimator and provide additional results if the reviewer has additional suggestions.\"}",
"{\"comment\": \"We thank the reviewer for their quick turnaround time and for raising their score.\\n\\nWe understand Q5 now\\u2014it is quite a different scenario from the one we focus on, but nonetheless interesting. It is hard to make a conclusive statement here, but our intuition is that the way we choose prompts is crucial to our success, and instead having access to a set of ~arbitrary responses of the model might make things much more difficult. There could be ways to adapt our method to be more suitable for this case though.\"}",
"{\"comment\": \"**Q4: Could you discuss the efficiency of the test with respect to the number of tokens?**\\\\\\nWe agree with the reviewer that discussing the cost of the tests is relevant and should be clearly presented in our experimental evaluation.\\n\\nWe had discussed in Appendix D how the power of the Cache-Augmented test and the Fixed-Sampling test scales with the number of queries. Additionally, we had discussed in Figure 2 how many samples should be used for the Red-Green test and how many tokens per query should be used for the Fixed-Sampling test. We then choose our test hyperparameters based on those experiments and show that running all three tests cost around $3.\\n\\nBased on the reviewer feedback, we have added a new table (Table 10) in Appendix G, which more clearly summarizes the number of tokens per test using the hyperparameters of the experiments in Table 1.\\n\\n**Q5: Do the authors believe that their tests would still be effective in detecting the presence of watermarks in texts that have been adversarially manipulated to remove the watermark, especially in a blackbox scenario?**\\\\\\nWe are not entirely certain whether we understand the reviewer's question.\\n\\nIf the reviewer is referring to third-party modifications to remove the watermark (for instance, paraphrasing), since we are directly querying the model provider\\u2019s LLM, this does not affect our method. Indeed, it does not make sense for the watermark provider themselves to try to remove their own watermark. If the reviewer is referring to attempts by the model provider to hide the watermark to increase undetectability, we study such adversarial modifications in Appendix E. We are happy to follow up if our answer does not fit the reviewer's question.\\n\\n[1] \\u201cLarge Language Model Watermark Stealing With Mixed Integer Programming\\u201d, Zhang et al., 2024\\\\\\n[2] \\u201cWatermark stealing in large language models\\u201d, Jovanovic et al., ICML 2024\"}",
"{\"comment\": \"**Q3: Could there be a single test for all watermarks?**\\\\\\nAs discussed in Q2, while they share the same idea on a meta level, our tests are specifically instantiated to three core ideas behind most current schemes. We argue that specificity has the benefit of allowing for more power and more precise results, as we can directly know to which family the tested scheme belongs. The latter can help enable downstream attacks, e.g., the attacks in [1,2] are only applicable to Red-Green schemes. Similarly, knowing the family allows for parameter estimation, which is a necessary step to mount such attacks.\\n\\nWe believe it may be possible to unify the different tests within a single prompt. However, given that the total cost of running our tests is roughly \\\\\\\\$3, we don\\u2019t see the practical benefits of a single unified test for the three tested families. Moreover, a joint test could be more complex, harder to understand and may not be necessarily cheaper. Finally, in case of fundamentally new scheme families, even a joint test that we hypothetically devise now would still need to be updated/revised, as it would not be directly applicable. \\n\\nWe welcome follow-up work that improves the fundamental aspects of detection (power, detection of newer schemes), and believe that our tests can serve as a solid baseline and further provide insight into the key drawbacks of each of the fundamental ideas used in the literature from the perspective of detectability. \\n\\n**Q4: Why do the tests fail on current black box LLMs?**\\\\\\nWhile the results of our tests do not provide any guarantees in that regard, we believe this is because the tested APIs were indeed not watermarked\\u2014thus, we do not see this as a weakness of our work. In our new experiment in App. A we also repeat our Gemini test on the new Gemini 1.5 Flash API, and still find no evidence of a watermark. 
This matches the public claims of Google DeepMind, which have announced the watermark only on Web and App deployments. Note that in another new experiment in App. A we demonstrate that the same watermark can be detected by our tests when deployed locally.\\n\\n[1] \\u201cLarge Language Model Watermark Stealing With Mixed Integer Programming\\u201d, Zhang et al., 2024\\\\\\n[2] \\u201cWatermark stealing in large language models\\u201d, Jovanovic et al., ICML 2024\\\\\\n[3] \\u201cOn the learnability of watermarks for language models\\u201d, Gu et al., ICLR 2024\\\\\\n[4] \\u201cDe-mark: Watermark Removal in Large Language Models\\u201d, Chen et al., 2024\\\\\\n[5] \\u201cWatermarking of large language models\\u201d, Scott Aaronson, 2023 Workshop on Large Language Models and Transformers, Simons Institute, UC Berkeley\"}",
"{\"summary\": \"The authors introduce statistical tests for detecting three main watermark families under blackbox setting, namely, Red-Green, Fixed-Sampling, and Cache-Augmented watermarks. They confirm the effectiveness of their methods in an extensive experimental evaluation across seven schemes and five open-source models, and execute them on three deployed models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper suggests that current watermarking schemes may be susceptible to detection in the black-box setting and verify it in their experiments.\", \"weaknesses\": \"- This paper lacks a clear mathematical presentation of its algorithms, and the descriptions are often vague.\\n\\n- The detection tasks for Fixed-Sampling and Cache-Augmented watermarks are trivial, and the proposed simple algorithm can be easily defended against.\\n 1. The detection algorithm based on unique outputs is not practical. In real-world applications, one can simply skip the first few tokens to ensure that generated outputs are different, which has been proposed in Algorithm 3 in Christ et al, 2024[1].\\n 2. The detection algorithm focused on cache is not applicable. It could take too much time for the detection to complete in waiting for the cache to be reset in a global cache. While user cache is usually not applicable due to a potentially large number of users.\\n 3. The cache mechanism is only a minor component of these watermarking schemes, and removing it often does not degrade performance, as discussed in Hu et al., 2023[2].\\n\\n\\n- Reporting median p-values over 5 watermarking keys is impractical, as only a single watermarking key is typically used per model in real-world applications.\\n\\nThe median p-value is not a good metric, as it does not reflect the actual false positive rate. 
It is also difficult to interpret.\\n\\n- As shown in Figure 3, there are large deviations from the actual $\\\\delta$, indicating that the current results may not be suitable for downstream tasks.\\n\\n[1] Christ, Miranda, Sam Gunn, and Or Zamir. \\\"Undetectable watermarks for language models.\\\" The Thirty Seventh Annual Conference on Learning Theory. PMLR, 2024.\\n\\n[2] Hu, Zhengmian, et al. \\\"Unbiased watermark for large language models.\\\" arXiv preprint arXiv:2310.10669 (2023).\", \"questions\": \"1. The key algorithm for calculating the p-value in lines[197-240] is too vague. Could you please clarify it?\\n\\n2. Could you provide a false positive rate for your detection algorithms? Additionally, the false positive rate may increase as we need to test various different types of watermarking schemes.\\n\\n3. Could you provide a detailed algorithm for estimating the context size as described in lines[781-791], along with the corresponding experimental results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"[1] \\u201cUndetectable watermarks for language models\\u201d, Christ et al., COLT 2024\\\\\\n[2] \\u201cPublicly detectable watermarking for language models\\u201d, Fairoze et al., 2024\\\\\\n[3] \\u201cExcuse me, sir? Your language model is leaking (information)\\u201d, Zamir et al., 2024\\\\\\n[4] \\u201cProvable robust watermarking for ai-generated text\\u201d, Zhao et al., ICLR 2024\\\\\\n[5] \\u201cLarge Language Model Watermark Stealing With Mixed Integer Programming\\u201d, Zhang et al., 2024\\\\\\n[6] \\u201cWatermark stealing in large language models\\u201d, Jovanovic et al., ICML 2024\\\\\\n[7] \\u201cOn the learnability of watermarks for language models\\u201d, Gu et al., ICLR 2024\\\\\\n[8] \\u201cDe-mark: Watermark Removal in Large Language Models\\u201d, Chen et al., 2024\"}",
"{\"title\": \"Discussion window ending\", \"comment\": \"We kindly remind the reviewer to let us know if our response addressed their concerns, as the discussion window ends shortly. We are happy to discuss any outstanding points further.\"}",
"{\"summary\": \"This paper proposes a black-box detection method for identifying whether a watermark is embedded in a Large Language Model (LLM). In this paper, the detectability of current watermarking schemes is investigated for the first time in a practical black-box environment. The researchers developed statistical test methods to detect the presence of watermarks and estimate parameters using a limited number of black-box queries for three popular families of watermarking schemes; Red-Green, Fixed-Sampling and Cache-Augmented. Experimental results show that these approaches are effective and cost-efficient across multiple open source models and different settings. The paper also discusses the ethical implications of its work, highlighting the benefits of raising awareness of the ease of detection of watermarking schemes, despite the potential risk of misuse.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper, for the first time, examines the detectability of current watermarking schemes in a practical black-box setting, which is practical in the real detection scenario.\\n2. The method is well written and the method makes sense and is easily understood. Each method has a clear section structure. \\n3. The experimental results in the black-box scenario verify the effectiveness of the method.\", \"weaknesses\": \"1. Although the authors pointed out that their motivation is to study the ability of current watermarks to resist detection, they did not highlight the significance of watermark detection in real scenarios. Providing specific application scenarios of black-box watermark detection can help readers better understand the contribution of black-box watermark detection.\\n\\n2. The results in Table 1 indicate the method in the paper is constrained by the need for distinct detection techniques for various watermarking methods, with poor generalization among them. 
As more watermarking methods are proposed, this may increase the cost of detecting watermarks.\\n\\n3. Minor concern: Watermark detection results in Table 2 for production-level language models accessed via API are suboptimal and you can not conclude on the presence of a watermark, which brings some concerns to readers about real-world detection.\", \"questions\": \"1.\\tCan you discuss more application scenarios of watermark detection? This question has a great impact on the contribution of the paper.\\n\\n2.\\tCan you discuss potential commonalities between their detection techniques for different watermarking families? The universality of watermark detection technology is beneficial to reducing detection costs.\\n\\n3.\\tCan the authors discuss more about why current detection methods are unable to determine the watermarking method of real-world production-level LLMs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**Q7: Why is the Red-Green test reported over 5 watermarking keys when, in practice, only one key is used?**\\\\\\nWe believe this is a misunderstanding. For the results presented in Table 1, the key is fixed while conducting the test. The key is simply changed between different independent repetitions of the test. The goal is to ensure that the test works no matter which private key is used by the watermark. We updated the paper to clarify this point.\\n\\nFurther, in Appendix A, we test the case of a multiple-key Red-Green watermark (as similarly proposed in [10]), which corresponds to the case where, for each token, the key used for watermarking is randomly chosen from a fixed set of keys. However, this is an orthogonal experiment and unrelated to the 5 watermarking keys from Table 1.\\n\\n**Q8: Is $\\\\delta$-estimation suitable for downstream tasks despite large errors?**\\\\\\nThe main focus of our work is the detection of the watermark and demonstrating that estimating parameters at a relatively low cost is possible; however, as the reviewer notes, the estimator for $\\\\delta$ may not be very accurate.\\n\\nRegarding downstream tasks, to our knowledge, there is no work that requires the knowledge of $\\\\delta$ yet, so we cannot determine if the presented estimator is accurate enough. However, some prior works ([8,9]) required knowledge of both which watermark is present and its context size. In that case, because we experimentally achieve 100% accuracy in context size estimation (due to its discrete nature), we strongly believe that it is suitable for such downstream tasks.\\n\\n**Q9: Would your p-values drastically increase if you run several detection tests in sequence?** \\\\\\nAs each test is performed independently (new samples are generated each time), this is an instance of the multiple testing problem.\\n\\nIn our paper, the reported rates and p-values are presented without any multiple testing correction. 
Multiple testing is a field of research in its own right, and there are multiple strategies to aggregate p-values. The challenge of multiple adjustments is in defining the family of hypotheses. One scenario could be that a malicious user wants to detect only Red-Green watermarks. Another could be that a malicious user performs our three tests along with other tests of their own. In both cases, the family of hypotheses is different, and so is the way to adjust for multiple testing. Hence we report our p-values and rejection rates without accounting for multiple testing.\\n\\nTo avoid any confusions, we updated the paper to clarify that the reported rejection rates do not account for multiple testing. \\n\\n\\n[1] \\u201cUndetectable watermarks for language models\\u201d, Christ et al., COLT 2024\\\\\\n[2] \\u201cWatermarking of large language models\\u201d, Scott Aaronson, 2023 Workshop on Large Language Models and Transformers, Simons Institute, UC Berkeley\\\\\\n[3] \\u201cPublicly detectable watermarking for language models\\u201d, Fairoze et al., 2024\\\\\\n[4] \\u201cUnbiased watermark for large language models\\u201d, Hu et al., ICLR 2024\\\\\\n[5] \\u201cDipmark: A stealthy, efficient and resilient watermark for large language models\\u201d, Wu et al., ICML 2024\\\\\\n[6] \\u201cRobust distortion-free watermarks for language models\\u201d, Kuditipudi et al., TMLR 05/2024\\\\\\n[7] \\u201cOn the learnability of watermarks for language models\\u201d, Gu et al., ICLR 2024\\\\\\n[8] \\u201cLarge Language Model Watermark Stealing With Mixed Integer Programming\\u201d, Zhang et al., 2024\\\\\\n[9] \\u201cWatermark stealing in large language models\\u201d, Jovanovic et al., ICML 2024\\\\\\n[10] \\u201cA watermark for large language models\\u201d, Kirchenbauer et al., ICML 2023\"}",
"{\"metareview\": \"Summary: This paper studies the problem of detecting LLM watermarks in a black-box way, without even knowing the watermark key. Extensive experiments across three families of LLM watermarks, Red-Green, Fixed-Sampling and Cache-Augmented, verify the effectiveness of the proposed method.\", \"strengths\": \"1. This paper is the first work that detects LLM watermarks in a black-box way. It suggests that current watermarks are susceptible to detection in the black-box setting.\\n2. The paper is well-written and the experiments are rich.\", \"weaknesses\": \"1. Reviewers have concerns about the claim that the provably-undetectable schemes \\\"lack experimental validation\\\" and \\\"are not yet practical due to slow generation speed.\\\"\\n2. Reviewers have concerns about the generalizability of the proposed detectors, as they are mostly based on reverse engineering. The experiments were limited to just three classes of watermarks.\\n\\nAll reviewers consistently vote for acceptance, two of whom champion the paper with a score of 8. There is no doubt that the paper is above the acceptance bar of ICLR.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers consistently vote for acceptance, two of whom champion the paper with a score of 8. There is no doubt that the paper is above the acceptance bar of ICLR.\"}"
"{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their feedback and evaluation of our work. We are pleased to see that they believe our contributions fill an important gap in LLM research (gicP, xkZg), and serve as a strong foundation for future studies in the field (xkZg). We are also glad they appreciate the extensiveness of our experimental evaluation (Ay59, PSUp, xkZg). We have uploaded an updated version of the paper (new content marked blue) and replied to all reviewers\\u2019 questions in individual comments below. We are happy to engage in follow-up discussions.\"}",
"{\"title\": \"Raise my rating\", \"comment\": \"As mentioned above, the black box detection techniques of various watermarking methods are discussed, but these are not systematically integrated, resulting in a looser overall structure and a reading experience that is closer to a blog or technical report than an academic paper. Nonetheless, the paper performs well in terms of experimentation and is well documented, so I would like to raise my score.\"}"
]
} |
E4Fk3YuG56 | Cut Your Losses in Large-Vocabulary Language Models | [
"Erik Wijmans",
"Brody Huval",
"Alexander Hertzberg",
"Vladlen Koltun",
"Philipp Kraehenbuehl"
] | As language models grow ever larger, so do their vocabularies.
This has shifted the memory footprint of LLMs during training disproportionately to one single layer: the cross-entropy in the loss computation.
Cross-entropy builds up a logit matrix with entries for each pair of input tokens and vocabulary items and, for small models, consumes an order of magnitude more memory than the rest of the LLM combined.
We propose Cut Cross-Entropy (CCE), a method that computes the cross-entropy loss without materializing the logits for all tokens into global memory.
Rather, CCE only computes the logit for the correct token and evaluates the log-sum-exp over all logits on the fly.
We implement a custom kernel that performs the matrix multiplications and the log-sum-exp reduction over the vocabulary in flash memory, making global memory consumption for the cross-entropy computation negligible. This has a dramatic effect. Taking the Gemma 2 (2B) model as an example, CCE reduces the memory footprint of the loss computation from 24 GB to 1 MB, and the total training-time memory consumption of the classifier head from 28 GB to 1 GB.
To improve the throughput of CCE, we leverage the inherent sparsity of softmax and propose to skip elements of the gradient computation that have a negligible (i.e. below numerical precision) contribution to the gradient.
Experiments demonstrate that the dramatic reduction in memory consumption is accomplished without sacrificing training speed or convergence. | [
"large language model",
"large vocabulary",
"efficient"
] | Accept (Oral) | https://openreview.net/pdf?id=E4Fk3YuG56 | https://openreview.net/forum?id=E4Fk3YuG56 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"tQmemldjqm",
"oCZFzYXQLM",
"kj3ypEbvkn",
"kMKfKLeprn",
"j11slTaSPW",
"iJeAtZ0mvs",
"hRqTGWmZgJ",
"hJIYTO7Sho",
"fnHAh0sgPV",
"dDy1U6GkJ5",
"ZyVIyyUrRV",
"WcdxiRHhvI",
"HIJ5FwyJ4E",
"Ffu0gcjBh8",
"DnPlpF9hiq"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review"
],
"note_created": [
1730510335098,
1732503816741,
1732503772622,
1732503495178,
1737523448328,
1730499544701,
1730379271079,
1730740638790,
1732503465092,
1732514992020,
1732503692321,
1732503903025,
1732543342788,
1733098344553,
1734353100589
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission1344/Reviewer_2Zay"
],
[
"ICLR.cc/2025/Conference/Submission1344/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1344/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1344/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission1344/Reviewer_dN8N"
],
[
"ICLR.cc/2025/Conference/Submission1344/Reviewer_HR95"
],
[
"ICLR.cc/2025/Conference/Submission1344/Reviewer_XSRM"
],
[
"ICLR.cc/2025/Conference/Submission1344/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1344/Reviewer_dN8N"
],
[
"ICLR.cc/2025/Conference/Submission1344/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1344/Authors"
],
[
"ICLR.cc/2025/Conference/Submission1344/Reviewer_HR95"
],
[
"ICLR.cc/2025/Conference/Submission1344/Reviewer_XSRM"
],
[
"ICLR.cc/2025/Conference/Submission1344/Area_Chair_KC3K"
]
],
"structured_content_str": [
"{\"summary\": [\"The paper proposes a novel method of skipping most of the unneeded computation inside LM heads during training when using cross-entropy loss. Its key contributions are:\", \"A memory efficient indexed matrix multiplication method, which employs sparsity to accelerate the computation.\", \"A memory efficient linear-log-sum-exp method, which employs dynamic chunking to reduce memory requirements.\", \"Gradient filtering, which further improves sparsity of the gradient computation.\", \"Vocabulary sorting, which allows entire chunks to be skipped during computation.\", \"The method is evaluated on speed, memory usage, and convergence, where it shows massive improvements in memory usage for computing losses compared to alternative methods, marginal improvements in speed and negligible degradation in convergence and training quality.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Well motivated problem. Reducing the memory footprint of LLMs during training is important.\", \"Method generalizes beyond transformer LLMs.\", \"Demonstrates convergence guarantees compared to cross entropy.\", \"Extensive benchmark results.\"], \"weaknesses\": [\"Preliminaries (section 3) does not adequately prepare the reader for the complexity of the notation in section 4.\", \"Section 4 is particularly hard to understand if the reader does not have a deep understanding of GPU kernels and the architecture of modern LLMs.\", \"There is a lack of key insights; CCE seems like an arbitrary monolithic algorithm that came out of nowhere.\", \"Perhaps decoupling the theoretical reasoning from the actual GPU implementation could make the explanation clearer. For example, in line 201, it says \\\"section 4.2 describes how to compute the [...] 
operation efficiently\\\", but it is initially unclear to the reader why that operation might be efficient unless the reader can fully understand the intricacies of creating an efficient GPU kernel as described in section 4.2. Same goes for sections 4.1 and 4.3.\", \"Otherwise, starting from an already efficient GPU implementation of standard CE and focusing on the steps needed to modify it into the CCE method could further improve readability and clarity.\", \"A lack of ablation studies for the extensive modifications brought on by CCE.\", \"Sections 4.1, 4.2 and 4.3 make a large number of significant assumptions, modifications and improvements to the traditional CE algorithm; it is not clear whether each modification is actually necessary or which are the most important ones.\", \"Unclear whether CCE's improvements are GPU-dependent or not. Would it work in non-parallel cases such as on a single-threaded CPU?\"], \"questions\": [\"What are the theoretical justifications for why CCE might be much more computationally and memory efficient compared to traditional CE? For example, why can't you apply the same chunking strategies used in CCE for traditional CE?\", \"What are the key insights that make CCE work? It seems to me currently that all of the contributions are mixed together, where CCE is an all-or-nothing monolithic algorithm.\", \"How does CCE compare to CE on the CPU, or in non-parallel cases? Are the improvements algorithmic or do they need to take advantage of GPU parallelization strategies?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to dN8N (2/2)\", \"comment\": \"> In Section 4.3, the Gradient filtering paragraph, \\\"If stored in bfloat16 with a 7-bit fraction, any value below 2^{-12} will likely be ignored due to truncation in the summation or rounding in the normalization.\\\" Can you explain this in detail? Providing a brief explanation of the numerical precision issues in bfloat16 and how they relate to the gradient filtering threshold is appreciated.\\n\\nAdding two numbers in floating point follows this logic: let\\u2019s assume we are adding two numbers, a and b, such that b is bigger than a. Then\\n\\nStep 1. Separate the mantissa (the fractional part) and the exponent\\nStep 2. Re-write the mantissa of the smaller number (a in our case) such that it shares the same exponent as the larger number\\nStep 3. Add the mantissas of a and b\\nStep 4. Convert the resulting mantissa and exponent into normalized form.\\n\\nStep 2 is where truncation happens and the intuition of gradient filtering comes from. In bfloat16, if the exponent of b is more than 2^7 times larger than that of a, the 7-bit mantissa no longer has enough precision to represent any of a using the exponent of b. For gradient filtering, we are only concerned with values in the range [0, 1], so the threshold of 2^{-12} means that we only keep values that don\\u2019t get rounded to zero when b = 2^{-5}.\\n\\nWe have added this to the Appendix.\\n\\nLet us know if we have addressed your concerns.\\n\\nFinally, if you believe we have, could you please consider raising your score to reflect that?\"}",
"{\"title\": \"Response to dN8N (1/2)\", \"comment\": \"We thank the reviewer for their review. We are pleased they found our problem \\u201cwell-motivated\\u201d, our solution \\u201cclear and easy to understand\\u201d, and our writing \\u201cvery clear and easy to follow.\\u201d\\n\\n> Can you provide details about how the 89% number is calculated and include a brief calculation or breakdown of the memory usage in the paper or appendix?\\n\\nWe provide the raw numbers we use in this calculation in the appendix, Table A2 (in the initial version, Table A4 in the updated version). 89% for Gemma2 comes from logits / (logits + activations) = 64000 MB / (64000 MB + 7488 MB) = 89.5%. This ignores the memory used by Weights+Opt+Grad, as the amount of memory used by that is unaffected by the number of tokens and depends on the exact details of the sharding strategy and number of GPUs.\\n\\nWe have updated the appendix to provide more detail. To summarize here, we use a simplified model and compute the memory usage as follows:\\n\\nLogits: NumTokens * VocabularySize * BytesPerFP32\\nActivations: NumTokens * HiddenSize * NumLayers * BytesPerBF16\\n\\nThis assumes activation checkpointing after every transformer layer and bfloat16. We assume a global batch size of 65536 tokens (a realistic number for 16 GPUs).\\n\\n> Memory usage without activation/gradient checkpointing\\n\\nWithout activation/gradient checkpointing, the memory used by intermediate activations would dominate all other sources of memory usage, for example 80+% for Llama 3 (8B). In this case CCE would reduce memory use by 10%. However, training without activation checkpointing is practically infeasible.\\n\\n> I think most of the analysis in this paper is based on the assumption that gradient checkpointing = True\\n\\nWe depend on activation/gradient checkpointing only when contextualizing the memory consumption of logits relative to the other parts of model training. Other analysis, e.g. 
Table 1, does not depend on activation/gradient checkpointing.\\n\\n> How does CCE perform without gradient checkpointing?\\n\\nCCE is complimentary to activation/gradient checkpoint and does not depend on it. Without activation/gradient checkpointing, CCE would still continue to perform the same and save the same absolute amount of memory, although amount relative to the total memory footprint would decrease substantially due to intermediate activations dominating.\\n\\n> can you explain where 1477MB, 8000MB, 4000MB, and 24000MB come from? If I understand correctly, the logits.shape is (8192, 256000) in float32, which should take 8000MB memory in total.\\n\\nCertainly. These numbers come from profiling methods to compute linear-cross-entropy loss in PyTorch and, unlike the model memory footprint numbers, are not calculated. We profiled to show real-world memory usage that accounts for all the realities of buffer re-use, allocator requirements, temporary buffers that are specific to the requirements of that implementation, etc.\\n\\n24,000 MB comes from using PyTorch to compute linear-cross-entropy loss. The memory ends up being much higher than just the 8,000 MB to store the logits in float32. In addition to the fp32 logits, we also need the logits in bfloat16 (the result of $C^\\\\top E$, which is performed in bf16), the logits in bf16 after the softcap is applied (Gemma2 specific), and the log-probabilities in float32. These 4 buffers alone account for 24,000 MB.\\n\\n4,000 MB comes from using torch.compile to optimize computation. Exactly how torch.compile is able to save this memory is quite opaque. We suspect that it saves memory by fusing kernels, aggressive buffer re-use, and reducing the number of temporary buffers.\\n\\n8,000MB comes from using Torch Tune (Torch Tune Team, 2024) with 8 chunks. 
This performs the computation in chunks and therefore reduces the peak memory utilization as it is able to re-use memory.\\n\\n1,477MB comes from using Liger Kernels (Hsu et al., 2024). This method makes even heavier use of chunking and adds in custom CUDA kernels to reduce the number of intermediary buffers.\"}",
"{\"title\": \"Response to XSRM (2/2)\", \"comment\": \"> I think there may be mistakes in the backward pass equations at the bottom of page 5 (lines 266-269)\\u2026\\n\\nThank you for catching this, we have updated the paper to correct for this dimension mismatch.\\n\\n> Can you explain this paragraph in more detail please: \\\"We implement the second matrix multiplication in the main memory of the GPU, as a blockwise implementation would require storing or synchronizing S\\u2026\\u201d? Here, is \\\"main memory\\\" HBM?\\n\\nFirst, let us clarify that the terms HBM and main memory are often used interchangeably and refer to the same thing here. We use main memory as HBM refers to a specific memory technology that is used in AI-focused GPUs (e.g., A100 and H100), but other technologies (e.g., GDDR6 and GDDR6x) may also fulfill the role of main memory.\\n\\nNow, on to the paragraph in question. In any matrix multiplication, there are two outer dimensions and an inner dimension that is reduced over. In a canonical GPU matrix multiplication kernel, the reduction along the inner dimension is performed in SRAM (like the $D$ dimension in Algorithm 2, L279). This memory is extremely fast, but local to a specific warp or block.\\n\\nFor the $\\\\nabla E^\\\\top = S C$ and $\\\\nabla C^\\\\top = S^\\\\top E$ matrix multiplications, the reduction of the inner dimension, $V$ and $N$, respectively, is performed in GPU main memory.\\n\\nWe do this because of the re-computation of the logits, $C^\\\\top E$. To compute the gradient, we must first recompute the logits and then use them to compute S. Here we follow the canonical GPU matrix kernel and perform the reduction along the $D$ dimension in SRAM. Turning to the two remaining matrix multiplications, the full matrix S has been block-divided amongst all the different CUDA blocks and thus no single block has all the values needed to reduce the inner dimension in SRAM. 
From here, there are two options: 1) synchronizing and storing S in main memory (which would eliminate our memory savings) or 2) performing the reduction in main memory (which has a performance cost due to the relatively slower memory). We chose the latter and developed gradient filtering to offset the performance cost.\\n\\n> Can you add a discussion around sequence parallelism approaches, which can also reduce logit memory per GPU by splitting logits along sequence dimensions\\n\\nGreat idea! Done in updated PDF.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}",
"{\"summary\": \"This paper proposes Cut Cross-Entropy (CCE) to reduce the memory consumption of the Classifier Head and CrossEntropy Loss. They find that the vocabulary of LLMs continues to grow, and under the gradient checkpointing setting, this part takes more than 50% of the memory consumption. CCE reduces the memory overhead by fusing the classifier head and the calculation of cross entropy loss into one kernel, and not materializing the intermediate logits in the forward process. In the backward pass, they re-compute the intermediate values to avoid this additional memory overhead (which is quite similar to FlashAttention's design). They further propose to leverage the sparsity pattern in the gradient of the classifier head to reduce the amount of computation. CCE reduces the memory overhead by 20x for the \\\"Loss+Gradient\\\" part and their loss curve matches the BF16 training baseline.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The problem this paper is trying to solve is well-motivated.\\n2. The solution to avoid the materialization of the large logit tensor is clear and easy to understand\\n3. The CCE component is easy to deploy in realistic settings.\\n4. The performance does not degrade (since the algorithm is nearly lossless considering the high sparsity level)\\n5. The paper writing is very clear and easy to follow (representing C, E, and LSE in different colors)\", \"weaknesses\": \"1. \\\"The probabilities materialized by the cross-entropy layer account for 89% of the memory consumption of Gemma 2 for single sequence x with length N = 80000\\\". Can you provide details about how the 89% number is calculated and include a brief calculation or breakdown of the memory usage in the paper or appendix?\\n2. Does this assumption still hold true when gradient checkpointing = False? I think most of the analysis in this paper is based on the assumption that gradient checkpointing = True. 
A subsection discussing or analyzing how your method performs when gradient checkpointing is disabled would be appreciated.\\n3. Similar to 2, in Table 1, can you explain where 1477MB, 8000MB, 4000MB, and 24000MB come from? If I understand correctly, the logits.shape is (8192, 256000) in float32, which should take 8000MB memory in total.\\n4. In Section 4.3, the Gradient filtering paragraph, \\\"If stored in bfloat16 with a 7-bit fraction, any value below 2^{-12} will likely be ignored due to truncation in the summation or rounding in the normalization.\\\" Can you explain this in detail? A brief explanation of the numerical precision issues in bfloat16 and how they relate to the gradient filtering threshold would be appreciated.\", \"others\": \"What LSE stands for (Log-Sum-Exp) should be defined on its first use.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work proposes Cut Cross-Entropy (CCE) to address the massive memory footprint of the standard cross-entropy loss calculation in LLM training. CCE tiles and fuses the logits indexing and the matmul between tiles of `lm_head` and embeddings in the forward pass. In backward propagation, CCE introduces gradient filtering and vocabulary sorting to optimize memory access patterns with negligible approximation errors. CCE presents experiments showing the effectively reduced memory footprint and indistinguishable influence on fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"This paper identifies a new challenge brought by the large size of vocabulary of language models, especially LLMs, i.e., the massive memory footprint consumed by the cross-entropy loss computation.\", \"The tricks of gradient filtering and vocabulary sorting in the proposed method are enlightening.\", \"The author implemented CUDA kernels to support the algorithm and provided experiments to verify the reduced memory footprint and latency.\"], \"weaknesses\": [\"The symbols in the derivation of CCE can be clearer. For example, the symbols on page 4 between lines 186 and 215, such as $C^T$, $C_{x_i}^T$, $C_X$, and $(C^T E)_X$, look confusing at first glance. It may be helpful to have a table of symbol definitions in the appendix.\", \"Experiments on how the memory and latency of the CCE kernel vary with the vocab size & model family can be added.\", \"Current Tab 1 presents the memory and latency results of Gemma-2-2B. The vocab size of Gemma-2-2B is 256000, which is larger than other LLMs. For example, the vocab size is 128256 for Llama-3-8B/70B/405B, 32768 for mistral-7B-v0.3, and 32064 for Phi-3.5. If the size of `lm_head` is `(model_hidden_size, vocab_size)`, then when `model_hidden_size` increases, do we expect a diminishing benefit of CCE? 
The evaluation will be more comprehensive if the author could discuss:\", \"Compared to the baselines, how the memory and latency change if CCE is applied to Gemma-2-27B training (same vocab size as Tab 1, but larger model hidden size)\", \"Compared to the baselines, how the memory and latency change if CCE is applied to training of the models like Phi3.5-mini (smaller vocab size, similar model size).\"], \"questions\": \"- Current Fig. 4 verifies that CCE has negligible influence on LLM fine-tuning. I am also curious about the impact of CCE on LLM pretraining. If an LLM is trained from scratch using CCE, how will the randomly initialized weights influence the gradient filtering and vocab sorting?\\n\\nFor other questions, please refer to my concerns in the Weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes \\\"Cut cross-entropy\\\" (CCE), a method that reduces the memory footprint of computing the cross-entropy loss during LM training dramatically, by never materializing the full matrix of logits/probabilities (which can be huge: batch * sequence_length * vocab_size). To accomplish this, it realizes that the cross-entropy loss can be broken down into two components: (1) the logit for the correct next token, and (2) the log-sum-exp (log of softmax denominator) --- both of these terms are scalars, and can be computed without materializing the full logit tensor. In particular, (1) is computed via simple vector dot-products (Algorithm 1), while (2) can be computed by accumulating partial sums of the log-sum-exp, without ever materializing all elements of the sum at once (Algorithm 2). For the backward pass (Algorithm 3), the paper proposes two methods --- gradient filtering and vocabulary sorting --- that reduce the backward pass time by skipping gradient computations for blocks of the softmax matrix where all values are < 2^(-12).\\n\\nPutting all these pieces together, CCE is able to match the speed and quality of existing implementations of the cross-entropy loss, while only requiring a very small percentage of HBM memory (e.g., 1 MB instead of 1GB->24 GB).\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"Reducing the memory requirement for computing the CE loss in LLMs is a strong contribution, especially as the vocabulary sizes, batch sizes, and sequence lengths of LLMs continue to grow. 
This custom kernel could save many people lots of time trying to get around OOM errors during training, and make it easier to train models with larger sequence lengths/batch sizes/vocab sizes.\", \"The algorithm is clever and elegant, taking inspiration from FlashAttention, which avoids materializing full attention score matrix during attention computation.\", \"The experiments demonstrate that CCE can reduce training memory requirements without impacting quality/convergence during training, or training speed, relative to strong baselines (e.g., torch.compile).\"], \"weaknesses\": [\"I think the section about the backward pass could be explained more clearly (see my questions below for points of confusion that could be clarified).\", \"I think there could have been additional experiments to explore how CCE performs relative to baselines as different hyperparameters vary (e.g., relative size of vocabulary vs sequence length vs. hidden dim, sparsity of S, etc.).\"], \"questions\": [\"Are there regimes where CCE is meaningfully slower than the torch.compile method?\", \"There were a few elements of the backward computation that I think could be explained more clearly:\", \"What is the \\\"v\\\" index in the lines 339-341 (Algorithm 3)?\", \"Why doesn't recomputing the large $C^T E$ matrix multiplication in the backward pass (Algorithm 3) lead to slow-downs? If I understand correctly, this is because although much extra time is spent on this recomputation, less time is spent on the subsequent matrix multiplications, due to the gradient filtering/vocab sorting? 
Can you break down more granularly how much time each component of CCE (especially the backward pass) takes, and compare this to the naive implementations, so that it becomes clear what is happening here?\", \"Can you explain this paragraph in more detail please: \\\"We implement the second matrix multiplication in the main memory of the GPU, as a blockwise implementation would require storing or synchronizing S...\\\"? Here, is \\\"main memory\\\" HBM?\", \"I think there may be mistakes in the backward pass equations at the bottom of page 5 (lines 266-269). Letting V be vocab size, L be sequence length, and D be hidden dimension, we can see that C is [D,V], E is [D,L], and S is [V,L]. Then for the matrix shapes to be correct, shouldn't it be:\", \"$d/dE = C (S \\\\cdot LSE_{grad})$ --- which is a [D,V] * [V,L] multiplication, which gives [D,L], which is the correct shape of E,\", \"$d/dC = E (S \\\\cdot LSE_{grad})^T$ --- which is a [D,L] * [L,V] multiplication, which gives [D,V], which is the correct shape of C.\", \"Can you include, at least in the appendix, a version of algorithm 3 that also includes the backward pass of the indexed matrix multiplication?\", \"NIT: I think it could be clearer to update the notation to be something like the following, to make Algorithms 2 and 3 easier to follow. For example, you could use $V$, $L$, $D$ to denote vocab size, sequence length, and hidden dim, and correspondingly $V_B$, $L_B$, and $D_B$ to denote the dimensions of the blocks, and $B_V = V/V_B$, $B_L = L/L_B$, $B_D = D/D_B$ to denote the number of blocks, and v, l, d to index into these blocks.\", \"Can you add a discussion around sequence parallelism approaches, which can also reduce logit memory per GPU by splitting logits along sequence dimensions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to XSRM (1/2)\", \"comment\": \"We thank the reviewer for their review. We are pleased that they found our algorithm \\u201cclever and elegant\\u201d, that reducing the memory requirement for CE loss is a \\u201cstrong contribution\\u201d, and that our work \\u201ccould save many people lots of time trying to get around OOM errors.\\u201d\\n\\n> I think there could have been additional experiments to explore how CCE performs relative to baselines as different hyperparameters vary (e.g., relative size of vocabulary vs sequence length vs. hidden dim, sparsity of S, etc.).\\n\\nThank you for the suggestion. Since submission, we have run additional experiments and included them in the updated appendix. These experiments alter the vocabulary size, hidden dimension, and sparsity of S. The sparsity of S is largely determined by the vocabulary size since the number of non-trivial values is largely constant (it is governed by the data type) and thus models with smaller vocabularies have less sparse S matrices.\\n\\nThe trend is that when the model has a high ratio of vocabulary size to hidden dim (e.g., Gemma 2 (2B) where the ratio is 111), CCE is faster than torch.compile. When the model has a low ratio (e.g. Phi 3.5 where the ratio is 10), CCE is slower than torch.compile, but continues to save a considerable amount of memory.\\n\\nWe have also added benchmarking with fewer tokens. CCE exhibits very similar behavior to Baseline and torch.compile \\u2014 as there are fewer tokens, it gets faster. Further, because CCE does not utilize chunking, it does not reach a plateau as performance becomes bound by kernel launch time, not computation time.\\n\\n> Are there regimes where CCE is meaningfully slower than the torch.compile method?\\n\\nSimply put, yes. When the ratio of vocabulary size to hidden size becomes small, CCE can be meaningfully slower than torch.compile. 
\\n\\nIn experiments fine-tuning Phi 3.5 Mini (where CCE has the worst relative performance to torch.compile), CCE only increases total training time by 1-2%. However, in our new experiments pre-training Phi 3.5 Mini, CCE increased total training time by 25% as gradient filtering is able to filter out significantly less blocks in this regime.\\n\\n> Why doesn't recomputing the large $C^T E$ matrix multiplication in the backward pass (Algorithm 3) lead to slow-downs?\\n\\nRe-computing $C^\\\\top E$ does lead to slowdowns and CCE would be faster if it didn\\u2019t need to re-compute this. We are able to offset this by the amount of time saved elsewhere.\\n\\n> Can you break down more granularly how much time each component of CCE (especially the backward pass) takes, and compare this to the naive implementations, so that it becomes clear what is happening here?\\n\\nWe have broken down the time spent for CCE and the Baseline implementation in their backward passes for Gemma 2 (2B) and updated the Appendix (see C.2). To summarize here:\\n\\nCCE spends considerably less time on the cross-entropy loss and softcap portions of the gradient computation. For Baseline, these are very memory intensive operations (there is relatively very little computation done). For CCE, the logits are already in SRAM and we do not write the result of this computation to main memory, saving a significant amount of time.\\n\\nCoincidentally, CCE spends a very similar amount of time computing the gradient wrt. the embeddings while CCE spends less time computing the gradient wrt. the classifier. This is because the axis we reduce along for the classifier, N, is shorter than the axis for the embeddings, |V|, and thus leads to less contention on global memory.\\n\\nCompared to Baseline, CCE saves 30 ms on the gradient of the logits wrt. cross-entropy loss, 12 ms on the gradient wrt. softcapping, 5 ms on the gradient wrt. E, and 15 ms on the gradient wrt. C. 
This saving of 62 ms offsets the time spent re-computing and applying the gradient filter.\\n\\nUnfortunately, the implementations using torch.compile are a black-box to us as any attempt to inject profiling or disable parts of the computation alters the computation graph and therefore torch.compile\\u2019s ability to fuse kernels. \\n\\n> Can you include, at least in the appendix, a version of algorithm 3 that also includes the backward pass of the indexed matrix multiplication?\\n\\nAdded as algorithm 4. Let us know if you have any suggestions to make it clearer.\\n\\n> I think it could be clearer to update the notation to be something like the following, to make Algorithms 2 and 3 easier to follow. For example, \\u2026\\n\\nThank you for the suggestion. We have switched from using M to V to denote indexing and blocking along the vocabulary dimension. We chose to continue to use N to denote indexing along the batch/input dimension as L is often used to represent the length of sequences (like the input to self-attention) and we did not want to cause any possible confusion as to whether CCE has temporal dependencies (it does not).\"}",
"{\"title\": \"Increase the score to 8\", \"comment\": \"Thank the authors for their explanation, and I raise my score accordingly.\"}",
"{\"title\": \"Reviewer 2Zay\", \"comment\": \"We thank the reviewer for their review. We are pleased that they found our problem \\u201cwell motivated\\u201d, that \\u201creducing the memory footprint of LLMs during training is important\\u201d, and our benchmark results \\u201cextensive\\u201d.\\n\\n> Section 4.1, 4.2 and 4.3 makes a large number of significant assumptions, modifications and improvements to the traditional CE algorithm, it is not clear whether each modification is actually necessary or which are the most important ones.\\n\\nWe are unsure what the reviewer is referring to here, and would love a chance to clear up any misunderstanding in Sections 4.1-4.3.\", \"cce_has_three_key_differences_from_a_traditional_ce_algorithm\": \"Fusion of matrix-multiplication + cross-entropy loss into a single kernel, gradient filtering, and vocabulary sorting. We ablate all these in Table 1. If the reviewer has requests for specific ablations, we are happy to run them.\\n\\nOverall, CCE acts as a plug-in replacement for CE with minimal assumptions. We require a linear/classification layer to precede CE, which is true for almost all deep networks we are aware of. To see true gains from CCE, this classification layer needs to cover a large number of classes.\\n\\n> What are the theoretical justifications on why CCE might be much more computationally and memory efficient compared to traditional CE?\", \"cce_is_two_operations_fused_together\": \"matrix multiplication followed by cross-entropy loss. The fused kernel executes exactly the same operations as CE; in fact, the two have almost identical theoretical FLOPS. However, the fused CCE kernel reduces memory usage by eliminating the need to store intermediary results. CCE improves computational efficiency by exposing more work at once to the GPU.\\nThe best analogy to this in published literature is FlashAttention, which presents a similar fused kernel for the attention operation. 
Unlike FlashAttention, CCE does not alter or limit the original operator (CE).\\n\\n> For example, why can't you apply the same chunking strategies used in CCE for traditional CE?\\n\\nCCE doesn\\u2019t use chunking. CCE uses blocking in service of mapping its computation to the GPU, but this is distinct from chunking and achieves a different goal.\\n\\nChunking strategies can be applied to save memory in the context of traditional CE, as shown by the Liger Kernel and Torch Tune baselines. These come with performance costs and still use considerably more memory than CCE.\\n\\nIt is possible to apply chunking strategies to CCE, but this would not save memory and would likely harm performance.\\n\\n> Preliminaries (section 3) does not adequately prepare the reader for the complexity of the notation in section 4.\\n\\nAs suggested by Reviewer HR95, we have added a section in the Appendix to provide more explanation on our notation.\\n\\n> How does CCE compare to CE on the CPU, or non-parallel cases? Are the improvements algorithmic or does it need to take advantage of GPU parallelization strategies?\\n\\nThe memory savings of CCE would directly apply to CPU implementations. In fact, if one were to write a sequential CPU implementation of a fused linear + CE operation, something like CCE would naturally emerge. Computational improvements may transfer to a CPU-parallel implementation too. The blocking strategy used for GPU matrix multiplication is also commonly used in parallel CPU matrix-multiplication algorithms for the same reasons: it makes efficient use of the cache hierarchy. Gradient-filtering would transfer to even a non-parallel case as it still enables work to be skipped, but vocabulary sorting would not be needed in a non-parallel case.\\n\\nWe focused on the parallel case using a GPU as non-parallel or CPU-parallel is simply too slow to train modern models.\"}",
"{\"title\": \"Response to HR95\", \"comment\": \"We thank the reviewer for their review. We are pleased they found that our work \\u201cidentifies a new challenge brought by the large size of vocabulary of language models\\u201d, that our proposed gradient filtering and vocabulary sorting are \\u201cenlightening\\u201d, and our experiments show the \\u201creduced memory footprint and indistinguishable influence on fine-tuning.\\u201d\\n\\n> The symbols in the derivation of CCE can be clearer. For example, the symbols on page 4 between lines 186 and 215, such as $C^T$, $C_{x_i}^T$, $C_X$, and $(C^T E)_X$, look confusing at first glance. It may be helpful to have a table of symbol definitions in the appendix.\\n\\nThank you for the suggestion! We have added a new Section A to the appendix. Let us know if you have any additional suggestions.\\n\\n> Experiments on how the memory and latency of the CCE kernel vary with the vocab size & model family can be added.\\n\\nWe wanted to know this too! We have added benchmarking of Gemma 2 (9 B), Gemma 2 (27 B), Llama 3 (8 B), Mistral NeMo (12 B), and Phi 3.5 mini to the appendix.\\n\\nWe find that as the ratio of vocabulary size to hidden size decreases, the latency of CCE increases relative to torch.compile, but it continues to save a large amount of memory. Only for Phi 3.5 mini is CCE slower than torch.compile (11 ms, 50% slower), but it continues to save substantial memory.\\n\\nWhile this difference may seem large, in practice it is largely negligible. In our fine-tuning experiments, CCE increases total training time by 1-2% for Phi 3.5 Mini.\\n\\n> I am also curious about the impact of CCE on LLM pretraining.\\n\\nWe have updated the appendix (Section C.1) to include small-scale experiments pre-training Gemma 2 (2B), Llama 3 (8 B), Mistral NeMo (12 B), and Phi 3.5 mini.\\n\\nCCE has identical training-loss curves to torch.compile. 
However, CCE results in lower probabilities on tokens that are in the validation set but not in the training set, and this results in higher validation perplexities. If we examine validation sequences with only tokens that were seen at training time (but still a novel combination of tokens), then validation perplexity matches torch.compile.\\n\\nWhether this effect persists in full-scale pre-training remains an open question.\\n\\n> If an LLM is trained from scratch using CCE, how will the randomly initialized weights influence the gradient filtering and vocab sorting?\\n\\nIn our small-scale pre-training experiments, the randomly initialized weights reduced the effectiveness of gradient filtering and vocabulary sorting. The impact of this depends on the model. For Phi 3.5 Mini, CCE increased total training time by 25% compared to torch.compile, while Gemma 2 saw an increase of less than 1%.\\n\\nLet us know if we have addressed your concerns.\\n\\nFinally, if you believe this paper should be highlighted at the conference, could you please consider raising your score to reflect that?\"}",
"{\"comment\": \"The additional experiments and explanations have addressed my concern.\\n\\nI would like to raise my score and believe this paper should be highlighted at the conference.\"}",
"{\"comment\": \"I thank the authors for their thorough responses to my questions. I leave my score unchanged.\"}",
"{\"metareview\": \"This paper introduces a method called \\\"Cut Cross-Entropy\\\" (CCE) to reduce the memory consumption of the cross-entropy layer in LLMs. As the vocabulary size grows and various memory optimization techniques are applied to LLMs, memory consumption increasingly shifts from weights and activations to the cross-entropy layer. This paper presents several strategies to mitigate the memory usage of this layer and achieve significant memory savings.\", \"main_strengths\": [\"Very novel insight\", \"Significantly improved performance\"], \"main_weaknesses\": [\"As several reviewers point out, the paper's clarity could be improved\", \"Lack of substantial studies on different architectures, hyperparameters, etc.\"], \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, the authors clarified lots of questions. They also provided several additional experiments: small-scale pre-training on various architectures, ablations on vocabulary size, hidden dimension, etc.,\"}"
]
} |
E4A7KtLB21 | Unbiased Attribution with Intrinsic Information | [
"Zhiyu Zhu",
"Zhibo Jin",
"Jiayu Zhang",
"Jianlong Zhou",
"Fang Chen"
] | The importance of attribution algorithms in the AI field lies in enhancing model transparency, diagnosing and improving models, ensuring fairness, and increasing user understanding. Gradient-based attribution methods have become the most critical because of their high computational efficiency, continuity, wide applicability, and flexibility. However, current gradient-based attribution algorithms require the introduction of additional class information to interpret model decisions, which can lead to issues of information ignorance and extra information. Information ignorance can obscure important features relevant to the current model decision, while extra information introduces irrelevant data that can cause feature leakage in the attribution process. To address these issues, we propose the Attribution with Intrinsic Information (AII) algorithm, which analyzes model decisions without the need for specified class information. Additionally, to better evaluate the potential of current attribution algorithms, we introduce the metrics of insertion confusion and deletion confusion alongside existing mainstream metrics. To continuously advance research in the field of explainable AI (XAI), our algorithm is open-sourced at https://anonymous.4open.science/r/AII-787D/. | [
"Interpretability",
"Attribution"
] | Reject | https://openreview.net/pdf?id=E4A7KtLB21 | https://openreview.net/forum?id=E4A7KtLB21 | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"v5vCAHxNIB",
"mMUISHdLvi",
"ehk0FT27NG",
"eLA95Q9HCA",
"cJggfZizio",
"WsadhdFIhg",
"WBaSwghwqy",
"RgTYOZU1Tm",
"Q9IUr1liE4",
"PGQEDDbi0U",
"Otd5TJfgTB",
"GiZJMWrrfe",
"DEfXfoDMez",
"CkbQD3XLFL",
"6Ft5v9aIjQ",
"3H9YmFO5MP",
"2DZimew7dY"
],
"note_type": [
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1733112952910,
1734139876235,
1730668661154,
1729110385989,
1731990097179,
1733154109967,
1733070241207,
1731990134588,
1737523979739,
1733070209660,
1730711695382,
1731990016996,
1731989941444,
1730743027099,
1731989549541,
1733070150667,
1733070060713
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9383/Reviewer_B4oD"
],
[
"ICLR.cc/2025/Conference/Submission9383/Area_Chair_YPRp"
],
[
"ICLR.cc/2025/Conference/Submission9383/Reviewer_B4oD"
],
[
"ICLR.cc/2025/Conference/Submission9383/Reviewer_kTtj"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Reviewer_emWS"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Reviewer_57hX"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9383/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"I thank the authors for the reply. I acknowledge I have read all the reviews and the rebuttal replies. I believe the experiments and the theory are acceptable and reasonable. I will adjust the score during the discussion period with other reviewers. The only concern is the writing; the authors may consider a better presentation method in text rather than leaving some images for readers to infer the definitions of some key concepts.\"}",
"{\"metareview\": [\"My recommendation is to reject the paper at this time. My decision stems from the consensus scores across reviewers and the lack of a champion. Given the lack of engagement, I also reviewed the submission personally \\u2013 reading the reviews, responses, and the original submission. In this case, my recommendation is the same as those of the reviewers but for slightly different reasons. In particular, I recognize the importance of the problem but believe that it would benefit from further refinement and development to stand the test of time. What is missing in this case are ties to decision theory and validating feature attribution methods. Given this, I am including a list of papers below that could serve as inspiration to develop the work in the future.\", \"[Logic for Explainable AI](https://arxiv.org/abs/2305.05172) - presents a formal framework that could serve as a foundation for theory.\", \"[A Decision Theoretic Framework for Measuring AI Reliance](https://arxiv.org/abs/2401.15356) - presents a framework for how decision makers may be able to use side information to make better decisions.\", \"[Do Feature Attribution Methods Correctly Attribute Features?](https://openreview.net/forum?id=h4J41lQqaJ3) - includes some test cases (which could be used to test the validity of the current method)\", \"[Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse](https://openreview.net/forum?id=wsWCVrH9dv) - highlights use cases of attribution on tabular datasets (which could be relevant here)\"], \"additional_comments_on_reviewer_discussion\": \"Three reviewers were only able to engage with the submission during the discussion period. Following the rebuttal and author-reviewer discussion, reviewer recommendations did not change substantially \\u2013 and leaned toward rejection. Given the lack of engagement, I also reviewed the submission personally \\u2013 reading the reviews, responses, and the original submission. In this case, my assessment is the same as those of the reviewers.\"}",
"{\"summary\": \"This paper focused on feature-level data attribution and pointed out two possible issues in traditional gradient-based methods: Information Ignorance and Extra Information. The paper proposed Attribution with Intrinsic Information (AII), a new feature attribution method which accumulates the gradients of the sum of log predicted probabilities over all classes. Furthermore, the paper also proposed new evaluation methods to address limitations of the existing insertion and deletion metrics.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper did some evaluation to show that the proposed method (AII) outperformed other traditional methods.\", \"The proposed method empirically resolves the problems spotted by the paper (i.e., Information Ignorance and Extra Information).\"], \"weaknesses\": [\"There should be a more intuitive and easy-to-understand illustration of Information Ignorance, Extra Information, what the traditional algorithms care about, and what AII cares about. It could be a diagram or a Venn diagram.\", \"It is still hard to understand why AII could resolve Extra Information (and the cause of extra information).\", \"More examples showing that the problems (Information Ignorance, Extra Information) are resolved would be helpful.\", \"Some small typos\", \"Table 2 caption: U-INS, U-DEL -> F-INS, F-DEL\", \"The experiments are carried out on image classification with some very popular datasets and models, which could be biased.\"], \"questions\": [\"What\\u2019s the difference between high-confidence and low-confidence? For example, does high-confidence cover 90% and low-confidence cover 10%? Or is it a 50-50 split?\", \"Is there any possibility that the issues could be presented in mathematical terms? The current illustration is clear (empirically) but not well-defined by anything other than plain text.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a new method to perform feature attribution called AII. They also suggest an approach (CFA) to find the correct null values for an image, which would more closely emulate removal of superpixels from an image relative to a human. They claim the AII algorithm mitigates two issues that other attribution methods have, namely feature ignorance and extra information.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Interesting setting\", \"Reasonably thorough experiments\"], \"weaknesses\": [\"Poor presentation\", \"No details given in the figure captions (1-5) of what models were used for the task, and in some cases the attribution method is also missing. The text referring to them also does not contain this information.\", \"Presented solutions such as CFA are expensive\", \"Inconsistent and sometimes erroneous notation\", \"The AII algorithm description is extremely curt even though it is their main contribution\", \"No discussion or comparison with contrastive/counterfactual explanation methods\"], \"questions\": \"The way the extra information part is written is confusing. The writing implies that somehow including the class information during attribution is the extra information. What they mean by extra information only becomes clear when they discuss Figure 2, where it is more a statement about the input space and the superfluous features selected therein.\\n\\nThe AII algorithm description is quite short given that it is one of their main contributions. There is no discussion on why the two issues of extra information and information ignorance are mitigated by their approach. The approach essentially takes an average gradient over all class predictions, which also seems to have limited innovation (given that other averaging approaches such as Integrated Gradients, although somewhat different, already exist).\\n\\nAlso, the AII algorithm uses an adversarial attack strategy which is reminiscent of contrastive/counterfactual explanation methods (viz. the contrastive explanations method (CEM), etc.). But no discussion is provided relative to those.\\n\\nThe CFA problem posed in Equation 2 is a hard optimization problem. Solving it for each image to find the null pixel values seems quite computationally intensive.\\n\\nThey consider just turning pixels black as the only solution for feature removal. However, that is not true. The idea in most of those methods is to insert a null value (also done in SHAP), which may be black pixels but could also be another value, say the average pixel value across the image or other images in the dataset. This phenomenon is not a new insight.\\n\\nIn Equation 2, the sum uses index i but the term has j in it. $x_1, ..., x_n$ are not defined, although I presume they imply pixel values.\\n\\n$x^t$ I presume implies the $t^{th}$ image, but it is sampled from a univariate uniform distribution. If they are pixels, then a subscript has become a superscript.\\n\\nExperiments are reasonably thorough given that they compare against a bunch of different methods, also using the setup of previous studies. However, I would have also liked to see timing results, as the optimizations they propose for CFA and AII seem to be much more expensive than the attribution baselines.\\n\\nMinor comment:\\n> \\\"The larger the attribution result, the more important that dimension is for the model\\u2019s decision.\\\"\\n\\nI think this statement has to be qualified by saying the larger the *absolute value* of the attribution, since a larger negative value also indicates an important feature.\\n\\nOverall, the paper requires significant rewriting in my opinion and is not currently ready for primetime.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**Response to Weakness 1: Illustration for Key Concepts**\\n\\nThank you for the suggestion to provide a more intuitive and visual illustration of the key concepts. We have already provided detailed textual definitions of \\\"Information Ignorance\\\" and \\\"Extra Information\\\" in Section 1 (Introduction, lines 52-54) and Section 3.2 (lines 160 and 210). In brief, \\\"Information Ignorance\\\" refers to features that should be attributed but are omitted, while \\\"Extra Information\\\" involves attributing irrelevant features. \\n\\nTo further enhance understanding, we will include diagrams (e.g., Venn diagrams) in a future revision to better illustrate the differences between traditional methods and AII in terms of what features they attribute.\\n\\n---\\n\\n**Response to Weakness 2: Resolving Extra Information**\\n\\nWe have elaborated on the causes and resolution of \\\"Extra Information\\\" in **Global Response 1**, including a mathematical formalization. In summary:\\n\\n- **Cause of Extra Information:** Traditional attribution methods accumulate gradients based on predefined target classes. This introduces assumptions about task-relevant features, leading to the inclusion of irrelevant information.\\n- **How AII Resolves It:** AII eliminates the reliance on target class-specific gradients by redefining gradient accumulation to consider all class outputs, ensuring task-irrelevant features are excluded.\\n\\nWe hope this explanation addresses the reviewer's concern.\\n\\n---\\n\\n**Response to Weakness 3: Additional Examples**\\n\\nFigures 1\\u20134 in the manuscript provide comprehensive examples of how our method addresses \\\"Information Ignorance\\\" and \\\"Extra Information.\\\" Moreover, additional examples can be found in our supplementary materials at [https://anonymous.4open.science/r/AII-787D/rebuttal/]. 
These examples further demonstrate the effectiveness of AII in resolving these issues.\\n\\n---\\n\\n**Response to Weakness 4: Typographical Errors**\\n\\nThank you for pointing out the typographical error in the caption of Table 2. We have corrected \\\"U-INS, U-DEL\\\" to \\\"F-INS, F-DEL\\\" and conducted a thorough review of the manuscript to eliminate other potential typos.\\n\\n---\\n\\n**Response to Weakness 5: Dataset and Model Bias**\\n\\nOur method is designed as a general attribution algorithm, and we chose well-known datasets and models to ensure transparency and reproducibility. Additionally, the baseline methods we compared against were also evaluated on these datasets, ensuring fairness in our experimental comparisons. The use of publicly available datasets and models is intended to provide a reliable and unbiased evaluation of our approach.\\n\\n---\\n\\n**Response to Question 1: High-Confidence vs. Low-Confidence**\\n\\nHigh-confidence data refers to instances where the model is confident in its decision, implying fewer features interfere with the decision-making process. Low-confidence data, on the other hand, represents cases where the model's confidence is low, indicating that more features may be influencing the decision. This distinction is illustrated in Fig.1 of the manuscript.\\n\\nTo address the question, we conducted additional experiments using a 50% confidence threshold as a dividing line. 
The results, summarized below, demonstrate that AII consistently achieves superior performance compared to other methods, even under this threshold:\n\n| Method | INS (Conf < 50%) | DEL (Conf < 50%) | INS (Conf \u2265 50%) | DEL (Conf \u2265 50%) |\n|-------------|------------|------------|------------|------------|\n| SM | 0.0235 | 0.0179 | 0.0532 | 0.0618 |\n| SG | 0.0250 | 0.0227 | 0.0396 | 0.0757 |\n| MFABA | 0.0876 | 0.0264 | 0.2548 | 0.0664 |\n| IG | 0.0306 | 0.0247 | 0.0570 | 0.0910 |\n| GIG | 0.0284 | 0.0251 | 0.0597 | 0.0865 |\n| FIG | 0.0236 | 0.0293 | 0.0824 | 0.0726 |\n| EG | 0.1206 | 0.1324 | 0.3002 | 0.3153 |\n| DeepLIFT | 0.0280 | 0.0266 | 0.0758 | 0.0854 |\n| BIG | 0.0751 | 0.0275 | 0.1597 | 0.0635 |\n| AtteXplore | 0.1193 | 0.0207 | 0.3395 | 0.0451 |\n| AGI | 0.1145 | 0.0198 | 0.3609 | 0.0541 |\n| **AII** | **0.2104** | **0.0226** | **0.5056** | **0.0709** |\n\n---\n\n**Response to Question 2: Mathematical Formalization**\n\nWe appreciate the suggestion to include more formal definitions of the issues. As discussed in **Global Response 1**, we have provided mathematical descriptions of \"Information Ignorance\" and \"Extra Information.\" These definitions will be incorporated into the main text in a future revision to further clarify and strengthen the paper.\"}",
"{\"comment\": \"Dear Reviewer B4oD,\\n\\nThank you for your recognition of our theoretical contributions and experimental results. We sincerely appreciate your thoughtful suggestions regarding the presentation of our key concepts. We assure you that in the final version of the manuscript, we will improve the textual descriptions and their integration with visual illustrations to ensure better clarity and readability.\\n\\nWe humbly request that you reconsider the score during the discussion period, taking into account our commitment to addressing these concerns in the final version.\\n\\nThank you once again for your constructive feedback and support.\\n\\nBest regards, \\nThe Authors of Submission 9383\"}",
"{\"comment\": \"Dear Reviewer kTtj,\\n\\nThank you for your detailed comments and observations, which have greatly contributed to improving our Submission 9383. \\n\\nIn our rebuttal, we have addressed your concerns as follows: \\n\\n1. **Extra Information Clarification**: We expanded on this concept in our global response, highlighting its connection to task-irrelevant features and provided examples to demonstrate its implications. \\n2. **AII Algorithm**: We elaborated on its novelty and how it mitigates issues of \\\"Information Ignorance\\\" and \\\"Extra Information\\\" by leveraging intrinsic information without relying on target classes. \\n3. **Comparison with Contrastive Methods**: We included a discussion on methods like CEM, explaining how AII adheres to attribution axioms and avoids reliance on external sampling. \\n4. **Computational Cost**: We clarified that CFA computations are performed once per model and reused, with minimal impact on overall cost. \\n5. **Timing Results**: We provided a comparison of frames-per-second (FPS) performance, demonstrating AII's comparable efficiency to AGI while achieving superior results. \\n\\nWe hope these responses address your concerns. If you have any further questions, we would be happy to discuss them. We also humbly request you to reconsider your evaluation in light of our clarifications. \\n\\nWarm regards, \\nThe Authors of Submission 9383\"}",
"{\"comment\": \"Since the Weaknesses and Questions raised are similar, we have primarily addressed Reviewer kTtj's questions.\\n\\n**Response to Question 1: Extra Information and Class Information**\\n\\nWe appreciate the reviewer\\u2019s observation regarding the clarity of the \\\"Extra Information\\\" concept. In **Global Response 1**, we have provided a detailed explanation, including mathematical definitions, to clarify the concepts of \\\"Information Ignorance\\\" and \\\"Extra Information.\\\" To summarize briefly: \\\"Extra Information\\\" occurs when attribution methods accumulate gradients under the assumption of a specific target class, leading to the inclusion of irrelevant features. In Fig.2, this manifests as attribution to superfluous regions, highlighting how predefined class assumptions can bias the results. This explanation will be expanded upon in the revised manuscript.\\n\\n---\\n\\n**Response to Question 2: Description of the AII Algorithm**\\n\\nThank you for highlighting the need for a more detailed discussion of how AII mitigates the issues of \\\"Information Ignorance\\\" and \\\"Extra Information.\\\" Unlike existing methods, AII introduces adversarial attack strategies to remove the dependency on predefined target classes, a key source of \\\"Extra Information.\\\" While averaging methods such as Integrated Gradients rely on class information, AII attributes features without such dependency, offering a novel contribution. Furthermore, we propose improvements to the commonly used Insertion and Deletion Scores to address their limitations, ensuring a more robust evaluation of attribution methods. Our experimental results consistently demonstrate the superior performance of AII across multiple metrics and dimensions.\\n\\n---\\n\\n**Response to Question 3: Comparisons with CEM**\\n\\nWe appreciate the reviewer's comment on the similarity between our method and contrastive explanation methods such as CEM.
However, CEM does not satisfy the attribution axioms, whereas AII strictly adheres to them. Additionally, CEM relies on introducing samples similar to the current one, which risks adding \\\"Extra Information.\\\" This makes it unclear whether the explanation pertains to the original sample or is influenced by the added samples. Our experiments ensure fairness by comparing methods that do not involve external sampling, demonstrating the effectiveness of AII. We will include a discussion on CEM in the revised paper for completeness.\\n\\n---\\n\\n**Response to Question 4: Computational Cost of CFA**\\n\\nThank you for your concern regarding the computational cost of CFA. In practice, the cost is not significant: CFA typically requires only 20 seconds per calculation and at most 90 seconds in extreme cases. Moreover, the null pixel values are computed once per model and reused, avoiding repeated calculations. This computational expense is negligible compared to the hundreds or thousands of hours required for model training.\\n\\n---\\n\\n**Response to Question 5: Null Value for Feature Removal**\\n\\nWhile the use of black pixels or average pixel values as null values has been explored, our main contribution lies in addressing \\\"Information Ignorance\\\" and \\\"Extra Information.\\\" Our entropy-maximization approach ensures that the chosen null value represents maximum uncertainty, aligning with mathematical definitions of entropy. This provides a theoretically grounded method for feature removal, which we believe is more robust than other heuristics.\\n\\n---\\n\\n**Response to Question 6: Typographical Issues in Equation 2**\\n\\nThank you for pointing out the errors in Equation 2. We have corrected the summation index from $i=1$ to $j=1$ and will provide more detailed descriptions for terms such as $x_1, \\\\ldots, x_n$ and $x_t$. The iterative relationship for $x^0$ is already defined, ensuring the equation is complete. 
These revisions will be included in the updated manuscript.\\n\\n---\\n\\n**Response to Question 7: Timing Results for AII and CFA**\\n\\nWe agree that computational efficiency is important. To address this, we measured the FPS (frames per second) for AII and other baseline methods. The results are as follows:\\n\\n| Method | FPS |\\n|-------------|--------------|\\n| SM | 66.52 |\\n| IG | 12.13 |\\n| FIG | 65.91 |\\n| BIG | 0.24 |\\n| MFABA | 29.28 |\\n| AttExplore | 0.29 |\\n| GIG | 287.64 |\\n| DeepLIFT | 5.43 |\\n| SG | 11.67 |\\n| EG | 41.46 |\\n| AGI | 0.14 |\\n| **AII** | **0.14** |\\n\\nAs shown, AII has comparable computational costs to AGI, while achieving significant performance improvements. Given that explainability prioritizes faithfulness and accuracy, we believe the additional computational expense is justified.\\n\\n---\\n\\n**Response to Minor Comment: Absolute Value of Attribution**\\n\\nWe agree with the reviewer\\u2019s suggestion and will revise the statement to clarify that larger absolute values of attributions indicate more important features for the model\\u2019s decision.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Dear Reviewer B4oD,\\n\\nWe appreciate your thorough review and valuable suggestions on improving the clarity and presentation of our Submission 9383. \\n\\nIn response to your feedback, we have made the following clarifications: \\n\\n1. **Illustration of Key Concepts**: We provided detailed definitions and proposed including diagrams (e.g., Venn diagrams) in future revisions for better visualization. \\n2. **Resolving Extra Information**: We explained how AII eliminates reliance on class-specific gradients, ensuring task-relevant features are accurately attributed. \\n3. **Dataset Bias**: We clarified that widely-used datasets and models were chosen to ensure fair and reproducible comparisons. \\n4. **Typographical Errors**: Errors in Table 2 were corrected, and the manuscript was thoroughly reviewed for other typos. \\n\\nIf you have additional questions or suggestions, we are open to further discussions. We also kindly ask you to reevaluate our paper considering these updates. \\n\\nWarm regards, \\nThe Authors of Submission 9383\"}",
"{\"summary\": \"This paper introduces an approach named AII to address biased attributions given by existing approaches. The sources of biased attribution are categorized into information ignorance and recognition of irrelevant features. This work mainly proposes two concrete improvements: (1) more advanced feature removal through entropy maximization, which could be used to develop an unbiased evaluation metric, and (2) considering the information of all classes during attribution, which is used in AII.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The phenomenon of two attribution biases, including omission of important features and incorrect identification of irrelevant features as significant, is explained well with clear examples.\\n\\nThe extensive experiments show the great effectiveness of the proposed approach, primarily due to the high insertion score. This could potentially be explained from the provided examples, as the explanations given by AII appear to be less noisy and concentrate more on important areas.\", \"weaknesses\": \"I am unsure about the settings of the example in Fig.2. This example tries to show that other algorithms wrongly capture the irrelevant square region. From the understanding of humans, the square region is indeed irrelevant. But attribution methods are designed to explain model behaviours instead of human understanding, and the model might be erroneous and actually use that region for classification. That is, the example might also need to prove that the model does not leverage that square region. Maybe it could be done by control experiments.\\n\\nI like the interesting case presented in Fig.1, where the confidence is low due to ambiguity. However, it seems those cases could not be captured by the metrics used in the experiments, because the calculation of scores does not involve reconstructing the exact same low-confidence prediction. Also, it might be helpful to replace Fig. 6 or 7 with an example of such a scenario.\\n\\nI understand that the existing feature removal needs to be improved, but I don\\u2019t have a good intuition about why entropy maximization would help. Maybe more intuition could be added around lines 300-304.\\n\\nI am concerned that the AII algorithm does not help explain less ambiguous figures such as Fig.3. And, for instance, in the MNIST data, it seems the large majority of figures are actually not that ambiguous.\", \"questions\": \"The observation in Fig.3 is interesting, and I wonder if this is a feature of the model or a \\u201cbug\\u201d in the attribution approach that needs to be addressed. If the same thing happens in the example of Fig.1, and considering that the proposed AII algorithm is essentially summing all those attributions (maps), should the attribution in Fig.1 then only highlight the dog area?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"**Response to Weakness 1: Fig.2 Settings and Irrelevant Regions**\\n\\nThank you for pointing out the concern regarding Fig.2. The issue raised has been discussed in the work by Shah et al. [1], which highlights the phenomenon of \\\"feature leakage\\\" and how attribution methods may mistakenly highlight irrelevant features. \\n\\nIn addition, the confidence score for this example is only 0.5786, while most examples in the MNIST dataset typically exhibit confidence scores above 0.95. Therefore, it is not appropriate to attribute this instance solely based on the current class, as traditional attribution methods require a predefined class to operate.\\n\\nReference:\\n[1] Shah, Harshay, Prateek Jain, and Praneeth Netrapalli. \\\"Do input gradients highlight discriminative features?\\\" Advances in Neural Information Processing Systems 34 (2021): 2046-2059.\\n\\n---\\n\\n**Response to Weakness 2: Metrics and Low-Confidence Examples**\\n\\nWe appreciate your suggestion to include more examples like Fig.1 in the paper. Additional examples demonstrating similar low-confidence scenarios are provided in our supplementary materials and can be accessed at [https://anonymous.4open.science/r/AII-787D/rebuttal/]. These examples further illustrate how our method effectively handles ambiguity in attribution.\\n\\n---\\n\\n**Response to Weakness 3: Intuition Behind Entropy Maximization**\\n\\nEntropy represents uncertainty in a model's decision. Increasing entropy during feature replacement simulates a process where the model becomes progressively uncertain about its predictions. We believe this better approximates the removal of a feature compared to directly replacing it with a zero value, as zero replacement can inadvertently introduce biases depending on the model's learned features.
This approach aligns better with the intuition that higher uncertainty indicates the effective removal of the feature's influence.\\n\\n---\\n\\n**Response to Weakness 4: Less Ambiguous Figures and Fig.3**\\n\\nRegarding the concern about Fig.3, it is indeed based on the original MNIST resolution (32\\u00d732). The original image used in Fig.3 is available in our open-source code repository at [https://anonymous.4open.science/r/AII-787D/rebuttal/]. While the MNIST dataset predominantly consists of less ambiguous examples, Fig.3 demonstrates a broader limitation of existing attribution methods: reliance on predefined classes. Our method addresses this limitation by eliminating the need for such specifications, resulting in more robust and generalized attributions.\\n\\n---\\n\\n**Response to Question 1: Observations in Fig.3 and Implications for Fig.1**\\n\\nThe issue highlighted in Fig.3 reflects a limitation in current attribution methods, not a feature of the model itself. These methods assume that specifying a class will yield meaningful attribution for that class, which we demonstrate to be problematic. For instance, in Fig.3, the attribution results for a digit labeled \\\"7\\\" remain highly similar across all other class labels, suggesting that the process of class specification is redundant.\\n\\nOur AII algorithm avoids this limitation by performing attribution without requiring class specification, achieving superior results as shown in our experiments. In the context of Fig.1, it would not be reasonable to highlight only the dog area, as the confidence for the \\\"dog\\\" label is 0.53, with the remaining 0.47 confidence likely associated with the \\\"cat.\\\" Simply focusing on the dog or cat alone would fail to explain why the \\\"dog\\\" confidence is only 0.53. Our method effectively addresses this issue by incorporating contributions from all relevant features, resulting in more faithful explanations.\"}",
"{\"comment\": \"**Response to Weakness 1 (W1): Definitions of Key Concepts**\\n\\nWe acknowledge the reviewer's concern about the clarity of the definitions for \\\"Information Ignorance\\\" and \\\"Extra Information.\\\" In response, we have provided detailed explanations in **Global Response 1**, where the definitions and mathematical formalizations of these concepts are presented. To summarize briefly:\\n\\n- **Information Ignorance** refers to the omission of features from non-target classes, leading to biased attribution.\\n- **Extra Information** denotes the erroneous inclusion of irrelevant features in the attribution process.\\n\\n---\\n\\n**Response to Weakness 2 (W2): Explanation of Mitigation**\\n\\nOur method addresses the problems of \\\"Information Ignorance\\\" and \\\"Extra Information\\\" by removing the dependency on a predefined target class during attribution. As highlighted in the manuscript, traditional methods rely on a specified class for attribution, which inherently excludes information from other classes, leading to \\\"Information Ignorance.\\\" Additionally, if the chosen class is imperfect (e.g., has low confidence), the attribution process introduces \\\"Extra Information,\\\" contaminating the results, as illustrated in Figure 1.\\n\\nThe Attribution with Intrinsic Information (AII) algorithm avoids these issues by using a gradient accumulation method that considers the contributions of all class outputs without prioritizing a single class. This ensures a more balanced and unbiased attribution, as shown in our experimental results.\\n\\n---\\n\\n**Response to Weakness 3 (W3): Clarification on Loss Function**\\n\\nThe statement in lines 157-158 refers to the predominant focus of the loss function $ L(f(x), y) $ on the target class $ y $. 
While it is true that cross-entropy loss considers the logits for all classes, its primary optimization direction is to increase the output for the target class $ y $ while suppressing the outputs for all other classes. This can be understood from the gradient perspective: a gradient step on the loss increases $ f(x)_y $, while the gradients with respect to $ f(x)_{j \\neq y} $ decrease those outputs. Thus, the optimization is inherently dominated by the target class $ y $, which can lead to \"Information Ignorance\" by neglecting contributions from other classes.\n\nWe appreciate the reviewer's attention to detail and hope this explanation clarifies the intended meaning of our statement.\"}",
"{\"summary\": \"The paper introduces a novel feature attribution method, Attribution with Intrinsic Information (AII), designed to address challenges in current feature attribution algorithms, specifically regarding \\u201cignored information\\u201d and \\u201cextra information.\\u201d AII accumulates gradients across the prediction vector, which may allow the attribution map to capture features contributing to competing classes. Additionally, the paper proposes a new evaluation metric that addresses a problem with the existing insertion and deletion metrics. The effectiveness of AII is demonstrated through several experiments on image classification.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper presents a novel method for feature attribution.\\n\\n2. The experiments include a variety of baseline feature attribution methods.\\n\\n3. The code is available.\", \"weaknesses\": \"1. The key concepts, information ignorance and extra information, are not well-defined in the paper. Instead, these concepts are almost completely illustrated through examples in Figure 1 and Figure 2. While I appreciate the illustrative examples, it is unclear what exactly they mean in a general setup.\\n\\n2. The proposed method is described within half a page in Section 3.4, and it is unclear how the proposed method mitigates the claimed problems (in fact, this may not be possible without clear definitions of the two problems). \\n\\n3. Some claims do not seem accurate.
For example, in line 157-158, I do not understand why \\\"the loss function L(f (x), y) only contains the class information of y\\\" since for cross-entropy loss, the loss also depends on the output logits for other classes.\\n\\n\\nOverall, the claimed problems are not well-defined, which undermines the motivation of the proposed method.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Global Response 1: Definitions of \\\"Information Ignorance\\\" and \\\"Extra Information\\\"\", \"comment\": \"We have provided clear and detailed definitions of \\\"Information Ignorance\\\" and \\\"Extra Information\\\" in Section 1 (Introduction, third paragraph, lines 52-54) and Section 3.2 (lines 160 and 210) of the manuscript. To briefly summarize:\\n\\n- **Information Ignorance** refers to scenarios where features that should contribute to attribution are omitted. \\n- **Extra Information** denotes situations where features that are irrelevant to the task are mistakenly attributed as important.\\n\\nFor clarity, we restate the definitions here:\\n\\n1. **Information Ignorance (Information Omission):**\\n This phenomenon occurs when attribution methods fail to account for features from classes other than the target class, despite their potential influence on the model's decision-making process. For instance, when the target class is \\\"dog,\\\" traditional attribution methods may completely ignore the presence of \\\"cat\\\" features in the image, even though the model considers both \\\"dog\\\" and \\\"cat\\\" features during decision-making. Such omissions can lead to biased attributions.\\n\\n2. **Extra Information (Irrelevant Features):**\\n This occurs when attribution methods mistakenly attribute irrelevant features to the task at hand. For example, an attribution method might highlight an unrelated region of an image as significant to the target class due to its reliance on specific loss functions, such as the maximum class output or cross-entropy, which introduce extraneous assumptions and biases.\\n\\nWe provide the following mathematical formalizations. Let $a_i$ represent the true attribution score of a feature and $\\\\tilde{a}_i$ denote the predicted attribution score. The set of important features is defined as $\\\\Phi = \\\\{i \\\\mid a_i \\\\geq \\\\tau\\\\}$, where $\\\\tau$ is a threshold indicating activation intensity.\\n\\n- **Information Ignorance:**\\n Exists if there is a set $\\\\varphi = \\\\{i \\\\mid i \\\\in \\\\Phi \\\\text{ and } \\\\tilde{a}_i < \\\\tau\\\\}$ with $|\\\\varphi| \\\\geq k$, where $k$ indicates the degree of feature omission. Larger $k$ implies more significant omission.\\n\\n- **Extra Information:**\\n Occurs if there is a set $\\\\varphi = \\\\{i \\\\mid i \\\\notin \\\\Phi \\\\text{ and } \\\\tilde{a}_i \\\\geq \\\\tau\\\\}$ with $|\\\\varphi| \\\\geq k$.\"}",
"{\"comment\": \"Dear Reviewer emWS,\\n\\nThank you for your detailed review and constructive feedback on our Submission 9383. Your comments on our examples and methods were very insightful. \\n\\nIn response, we have addressed the following: \\n\\n1. **Fig. 2 Example**: We clarified that the square region does not influence the model\\u2019s decision, as supported by control experiments and confidence scores. \\n2. **Low-Confidence Cases**: Additional examples and explanations were included in our supplementary material to demonstrate AII's ability to handle ambiguity effectively. \\n3. **Entropy Maximization Intuition**: We elaborated on how entropy maximization improves feature removal by representing maximum uncertainty, providing a theoretical basis for this approach. \\n\\nWe hope these clarifications address your concerns. If there are any further questions or suggestions, we are happy to discuss them. We also kindly request you to reconsider your evaluation in light of these improvements. \\n\\nWarm regards, \\n\\nThe Authors of Submission 9383\"}",
"{\"comment\": \"Dear Reviewer 57hX,\\n\\nThank you for your insightful feedback on the Submission 9383, *\\\"Unbiased Attribution with Intrinsic Information.\\\"* We greatly appreciate the time you took to review our work. \\n\\nWe have carefully addressed your concerns, particularly: \\n\\n1. **Clarity of Key Concepts**: We provided formal definitions and mathematical formalizations of \\\"Information Ignorance\\\" and \\\"Extra Information\\\" in our responses and clarified their distinctions. \\n2. **Method Explanation**: We expanded our explanation of how AII mitigates the identified challenges by removing reliance on class-specific gradients and leveraging intrinsic information. \\n3. **Loss Function Clarification**: We elaborated on the statement about the cross-entropy loss, highlighting its target-class prioritization as a source of bias. \\n\\nIf you have any additional concerns or suggestions, we would be happy to discuss them further. Additionally, we kindly ask you to reconsider your evaluation of our paper in light of our detailed rebuttal and clarifications. \\n\\nWarm regards, \\n\\nThe Authors of Submission 9383\"}"
]
} |
E48QvQppIN | Bayesian Optimization of Antibodies Informed by a Generative Model of Evolving Sequences | [
"Alan Nawzad Amin",
"Nate Gruver",
"Yilun Kuang",
"Yucen Lily Li",
"Hunter Elliott",
"Calvin McCarter",
"Aniruddh Raghu",
"Peyton Greenside",
"Andrew Gordon Wilson"
] | To build effective therapeutics, biologists iteratively mutate antibody sequences to improve binding and stability. Proposed mutations can be informed by previous measurements or by learning from large antibody databases to predict only typical antibodies. Unfortunately, the space of typical antibodies is enormous to search, and experiments often fail to find suitable antibodies on a budget. We introduce Clone-informed Bayesian Optimization (CloneBO), a Bayesian optimization procedure that efficiently optimizes antibodies in the lab by teaching a generative model how our immune system optimizes antibodies. Our immune system makes antibodies by iteratively evolving specific portions of their sequences to bind their target strongly and stably, resulting in a set of related, evolving sequences known as a *clonal family*. We train a large language model, CloneLM, on hundreds of thousands of clonal families and use it to design sequences with mutations that are most likely to optimize an antibody within the human immune system. We propose to guide our designs to fit previous measurements with a twisted sequential Monte Carlo procedure. We show that CloneBO optimizes antibodies substantially more efficiently than previous methods in realistic *in silico* experiments and designs stronger and more stable binders in *in vitro* wet lab experiments. | [
"Bayesian optimization",
"generative model",
"antibody",
"biological sequence"
] | Accept (Spotlight) | https://openreview.net/pdf?id=E48QvQppIN | https://openreview.net/forum?id=E48QvQppIN | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xDirDL32Dx",
"l2IfIv4uHT",
"g7qZpbDMQL",
"eBIbafh3mh",
"cRkQGG2qkY",
"a08bwiqbVA",
"LndvJb15je",
"Izf3GzrmLi",
"H18vqD2mCb",
"DzaO3BNhlo",
"CsyyoxL0cd",
"BSkYMeFtw7",
"Amfb3BnOj9",
"0XiptrxbFQ"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732373399012,
1732209696119,
1734762034873,
1737524145732,
1732209410887,
1732209566273,
1730719893815,
1730487026632,
1730304891404,
1732209673228,
1730671273343,
1732610808662,
1732209472055,
1732209597171
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission11784/Reviewer_Bdzt"
],
[
"ICLR.cc/2025/Conference/Submission11784/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11784/Area_Chair_xUSn"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission11784/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11784/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11784/Reviewer_Bdzt"
],
[
"ICLR.cc/2025/Conference/Submission11784/Reviewer_gX85"
],
[
"ICLR.cc/2025/Conference/Submission11784/Reviewer_mvea"
],
[
"ICLR.cc/2025/Conference/Submission11784/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11784/Reviewer_9Ldu"
],
[
"ICLR.cc/2025/Conference/Submission11784/Reviewer_mvea"
],
[
"ICLR.cc/2025/Conference/Submission11784/Authors"
],
[
"ICLR.cc/2025/Conference/Submission11784/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"I have read the responses and raised my score. I strongly recommend the authors include tables for the other figures and put them in the main text.\"}",
"{\"title\": \"Comment 2\", \"comment\": \">What is the edit distance distribution for the in-vitro designs? In the method description (Section 6.4) you state \\\"iteratively optimize F(X) over the top substitution for up to 3 substitutions\\\", does this mean that designs can only have a maximum edit distance of 3 in your experiments? If yes, were the baselines constrained to the same edit distance?\\n\\nYes, our designs have a maximum edit distance of 3. We used this largely as a technique to limit the amount of compute spent on optimization rather than as a technique to regularize our designs. First, even without the explicit limit, our designs often did not end up more than 3 mutations away: of the 200 sequences designed for testing in vitro, roughly one third had an edit distance of 1, one third an edit distance of 2, and one third an edit distance of 3; note that the best-performing binding and stability sequences both had an edit distance of 1. In a new experiment in the middle column of Fig. 14, we note that CloneBO performs roughly the same with an edit threshold of $L=1$, $3$, or $5$.\\n\\nThe baselines all consider different strategies to regularize their designs to be near previous sequences, which we did not modify. LaMBO, for example, applies a penalty on moving too far in the latent space, and in practice only suggested sequences with edit distance 1 in vitro.\", \"citations\": \"Olsen, Tobias H., Iain H. Moal, and Charlotte M. Deane. 2024. \\u201cAddressing the Antibody Germline Bias and Its Effect on Language Models for Improved Antibody Design.\\u201d bioRxiv. https://doi.org/10.1101/2024.02.02.578678.\\n\\nOlsen, Tobias H., Iain H. Moal, and Charlotte M. Deane. 2022. \\u201cAbLang: An Antibody Language Model for Completing Antibody Sequences.\\u201d Bioinformatics Advances 2 (1): vbac046.\"}",
"{\"metareview\": \"This paper proposes Clone-informed Bayesian Optimization (CloneBO), a Bayesian optimization procedure for antibody design. The interesting innovation comes from the way they capture the prior antibody distribution: They train an LLM on clonal families to mimic how the immune system diversifies antibodies. Notably, evaluations are performed in both simulated and **in vitro** experiments, showcasing the strong empirical potential of the proposed framework.\", \"additional_comments_on_reviewer_discussion\": \"NA\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}",
"{\"title\": \"Comment\", \"comment\": \"Thank you for your thoughtful review! We address each of your questions and suggestions below. We have also modified the figures of the text to make them more aesthetically pleasing without changing their data.\\n\\n\\n>The experiment seems weak. Many influential works in recent years are not taken into comparison, such as [1][2]. The authors claim that structure-based de novo design methods cannot make use of previous measurements and must have access to structure, so they are not suitable for this task, which I highly doubt. Antibodies are a specific protein type that relies heavily on structure to perform. Even though structure data are scarce, the authors should demonstrate CloneBO\\u2019s superiority by showing that using massive sequence data can lead to higher performance using only sequence.\\n\\nUnfortunately, the majority of structure-based design methods are not immediately applicable to all iterative design settings, and even methods such as [1, 2] come with some conceptual downsides. Nevertheless, we were also interested in the question of comparing them when they are applicable, so, as we describe below, in App C.4 we built a setting in which structure-based methods can potentially be used for iterative design, compared to [1], and did not see them outperform methods built for iterative design. \\n\\nWhen making a drug, there are a huge number of methods that are built to find \\u201chits\\u201d \\u2013 molecules that have a bit of activity. In almost every case, these \\u201chits\\u201d have too little activity or stability to act as a drug, so they require optimization; techniques for optimization are distinct from those used in practice for finding hits, and this is the drug design step we target with CloneBO. 
The vast majority of structure-based design methods target the former problem \\u2013 they take in a structure of a bound antibody and return designed sequences \\u2013 as pointed out by the reviewer\\u2019s citations [1, 2]. Since these methods target a different problem, they are not immediately applicable to iterative design.\\n\\n[1, 2] adapt de novo structure design methods to suggest mutations to existing sequences. However, they still suffer from some conceptual downsides compared to our baselines built for iterative design: (1) they require a co-crystal structure as a starting point, (2) they directly optimize for binding and have no obvious way to optimize for other properties such as expressibility or stability, (3) there is no obvious way for them to learn from previous measurements, and (4) they may fail if a crystal structure is not a good representation of binding dynamics in the lab.\\n\\nOur target in Figs. 3 and 4 did not have a co-crystal structure, so we could not compare to these methods. We therefore devised another setting to compare to structure-based methods in App C.4 \\u2013 optimizing for SARS-CoV binding. We took an oracle used by Jin et al., 2022 to validate their structure-based design method and sought to optimize binding for sequences with available co-crystal structures. We compared CloneBO and our baselines to DiffAb [1]; we could not compare to GeoAB [2] as they do not have available code for iterative design. We used a greedy optimization method to optimize sequences with mutations suggested by DiffAb. Note that in 3/6 cases we did not have a starting structure to give DiffAb. We see that DiffAb underperformed both CloneBO and the baseline LaMBO; this could be because (4) structure is a poor prior for what this oracle is measuring, or (3) CloneBO and LaMBO are able to find and exploit useful patterns in the previous measurements while the DiffAb optimization routine is not.\\n\\n>Does the optimization process in Fig. 3a converge? 
It seems that fitness is still rising in the end. See weaknesses.\\n\\nWhen optimizing an antibody, one usually begins with a fixed budget and is interested in maximizing the improvement one can get with that budget. We mimic this setting in our experiments in Fig 3 \\u2013 we have a budget of 100 steps to optimize the antibody as best we can. If we were given unlimited steps, all methods in Fig 3a would converge to the best sequence, as they would test every realistic antibody sequence.\\n\\n>The presentation is vague. Without any specific table, it\\u2019s hard to comprehend directly. This could also be a potential problem for follow-up works.\\n\\nIn our new draft, we\\u2019ve included a table with the exact values of the results of the in silico experiments in App. C.3 for reference by future practitioners.\", \"citation\": \"Jin, Wengong, Jeremy Wohlwend, Regina Barzilay, and Tommi Jaakkola. 2021. \\u201cIterative Refinement Graph Neural Network for Antibody Sequence-Structure Co-Design.\\u201d In International Conference on Learning Representations 2022.\"}",
"{\"title\": \"Comment\", \"comment\": \"Thank you for your thoughtful review and suggestions! We address each of your suggestions and questions in detail below. In addition to these points, our new draft has improved the aesthetics of many figures, but kept the displayed data the same.\\n\\n>The paper is well structured but somewhat hard to read, partly due to the many ideas that it tries to cover in the main text. I suggest moving some of the mathematical exposition into the SI and describing the intuition more concisely but clearly.\\n\\nThe most challenging section is the introduction of the twisted sampling. In the new draft, we\\u2019ve therefore moved two paragraphs of the mathematical exposition in Section 6.3 to the appendix and added a more intuitive explanation.\\n\\n>I\\u2019m not clear on the exchangeability claim; evolutionary processes have a clear arrow of time, with sequences ordered in a way that can be predicted, unless we are only looking at the final leaves of the tree, which in my understanding is not necessarily true for immune populations. I would appreciate a clearer exposition here. \\n\\nIndeed, unlike protein family sequences, clonal family sequences are not necessarily leaves of the tree. While this doesn\\u2019t affect exchangeability, it has the potential to affect the claim that the learned distribution has probability proportional to fitness by introducing bias and phylogenetic correlation. However, phylogenetic correlations and biases are also present in protein datasets, where the claim that the learned probability is proportional to fitness has been the basis for accurate mutation effect prediction. We hope the approximation is similarly accurate in our case. 
The evolutionary model used for CloneBO can in principle be improved by accounting for bias; as well, the presence of sequences which aren\\u2019t leaves in principle allows one to build models that can learn the direction of evolution, an exciting direction for future work.\\n\\nFormally, for protein modeling, the logic is that sequences in a protein family are (1) exchangeable and therefore come iid from some distribution that we can learn by training a machine learning model; next, (2) it\\u2019s reasoned that, since these proteins are leaves of an evolutionary tree, this distribution has probability proportional to fitness (for example, Weinstein et al., 2022). The sequences we see from a clonal family are also exchangeable, so they must come iid from some distribution we can learn with a model. However, unlike protein family sequences, clonal family sequences are indeed not necessarily leaves of the tree; this impacts the logic of (2), the interpretation of the learned model.\\n\\nIf antibody sequences in clonal families evolved forever and had little phylogenetic correlation, then population genetics suggests that their distribution is exactly in proportion to fitness. In principle, the fact that sequences in clonal families may not be leaves is not necessarily a problem \\u2013 one can, for example, consider them samples from the same MCMC chain. In practice, the presence of sequences that are not leaves likely increases phylogenetic correlation and bias from not evolving long enough. However, this bias, like large-scale phylogenetic correlation, is present in protein families as well (Weinstein et al., 2022). Therefore, we motivated CloneBO by suggesting that (2) may also be a good approximation for clonal families.
First, for each clone, it is often possible to infer the ancestral naive sequence $\\tilde X$; sequences in a clonal family are unlikely to evolve long enough to diverge substantially from this sequence, so a potentially more accurate model accounting for this bias may be $\\log p(X|\\mathrm{clone})=F(X)-\\mathrm{distance}(X, \\tilde X)$. As well, once one infers $\\tilde X$, immunologists often use the number of mutations in the framework region as an estimate for how long a sequence has been evolving; this potentially allows a model to learn the direction of evolution within each clonal family \\u2013 something that is impossible to do in the protein case where all sequences are leaves.\"}",
"{\"summary\": \"This paper treats antibody design from a pure sequence view, using clonal families to guide the model. The paper shows that CloneBO can optimize antibody sequences better than former methods. Notably, in vitro experiments also support its effectiveness, which is often missing in similar works.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The idea of applying martingales in antibody optimization is novel, and surprisingly fits the nature of the evolving process of antibodies.\\n2. The wet lab experiment is a highlight.\", \"weaknesses\": \"1. The experiment seems weak. Many influential works in recent years are not taken into comparison, such as [1][2]. The authors claim that structure-based de novo design methods cannot make use of previous measurements and must have access to structure, so they are not suitable for this task, which I highly doubt. Antibodies are a specific protein type that relies heavily on structure to perform. Even though structure data are scarce, the authors should demonstrate CloneBO\\u2019s superiority by showing that using massive sequence data can lead to higher performance using only sequence.\\n2. The presentation is vague. Without any specific table, it\\u2019s hard to comprehend directly. This could also be a potential problem for follow-up works.\\n\\n[1] Kong X, Huang W, Liu Y. End-to-end full-atom antibody design. arXiv preprint arXiv:2302.00203, 2023.\\n\\n[2] Lin H, Wu L, Yufei H, et al. GeoAB: Towards realistic antibody design and reliable affinity maturation. ICML 2024.\", \"questions\": \"1. Does the optimization process in Fig. 3a converge? It seems that fitness is still rising in the end.\\nSee weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper introduces a method, termed CloneBO, for optimizing antibody sequences for stability and binding. Their approach uses an LLM to model the distribution of a collection of clonal populations of antibodies, drawing inspiration from the evolutionary trajectories of antibodies in the body to predict beneficial mutations. The authors introduce several ideas to improve sampling quality from their LLM, including sampling from a martingale posterior and constructing a twisted MC scheme that biases towards observed antibodies. Their approach is tested on both in silico benchmarks and in vitro experiments and shows outstanding performance in the design tasks they\\u2019ve attempted.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Context sharing and placement of the solution: The exposition is strong (on the antibody design problem), the code is shared, well-organized, and readable. The paper targets an important problem and is lucid about how to get to impact in that domain.\", \"Innovation: Multiple interesting and recent ideas have been introduced together for this problem domain (simulating clonal evolution, martingale posteriors, combining twisted Monte Carlo and LLMs), and the results are convincing in silico and in vitro.\", \"Benchmarking: multiple relevant design approaches have been benchmarked, and the method performs well comparatively.\", \"In vitro validation: The authors conduct extensive validation, both in silico and in vitro. In vitro validation is rare among ML papers but absolutely crucial in evaluating whether the method has practical use. The authors have done ablation studies on the effects of their sampling scheme and show each individual component helps the performance of the optimizer.\"], \"weaknesses\": [\"The experimental data are not shared. 
This is not disqualifying for an ML paper in my view, but should be noted as a weakness.\", \"The paper is well structured but somewhat hard to read, partly due to the many ideas that it tries to cover in the main text. I suggest moving some of the mathematical exposition into the SI and describing the intuition more concisely but clearly.\"], \"questions\": [\"I\\u2019m not clear on the exchangeability claim; evolutionary processes have a clear arrow of time, with sequences ordered in a way that can be predicted, unless we are only looking at the final leaves of the tree, which in my understanding is not necessarily true for immune populations. I would appreciate a clearer exposition here.\", \"I\\u2019m unsure why the authors have chosen to model the light and heavy chains separately. A clear explanation would be helpful.\", \"Line 39: \\u201cup to thousands of previous iterations\\u201d \\u2013 this is somewhat misleading; not all iterations are the same, as in many cases these are done in batches (\\u201cmeasurements\\u201d are always done in batches), and batch-iterations rarely exceed 10. In silico iterations can reach thousands, but the description here is blurry, and \\u201cmeasurements\\u201d especially is best reserved for real-world queries rather than oracle queries.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The paper proposes a new way to design antibodies using Bayesian optimisation. In contrast to the most closely related existing approach (LaMBO), the authors propose a different way to capture the prior antibody distribution. They train a PLM on clonal families to capture how the immune system diversifies antibodies. They also introduce a twisted sequential Monte Carlo procedure to better incorporate previous lab measurements (successful designs). The in silico and in vitro results show that the proposed procedure improves upon LaMBO in terms of antibody synthesizability and stability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I find it quite a unique and nice idea to try to produce local diversification around the seed that matches the diversification our body produces during immunization, and its implementation goes a bit off the beaten path, with for example the martingale posterior. The experiments are quite thorough and convincing, with a strong baseline. The paper itself is well written and easy to follow.\", \"weaknesses\": \"This is not a big weakness, but I'd say that the CloneLM model makes more Fv mutations than one would expect. Also, in the appendix the clonal family does not fix the missing starting gap '-' in some cases, which is not ideal as it would always be a 'Q' for a human Ab with the given subsequent amino acids.\\n\\nThe melting temperature and binding oracle experiment uses VHHs, which I would say is not ideal. While humanized VHHs do look kind of like human VH chains, they are still different, even in trivial ways (e.g. they often have much longer CDR H3). 
It's nice that the proposed method deals with this distribution shift, but it's not entirely fair to compare to the baselines, which were only developed for 'normal' Abs.\\n\\nIn the paper you write \\\"de novo design method DiffAb\\\", but DiffAb is not de novo; it needs a co-crystal structure as a starting point.\\n\\nFigure 4 (b), second plot: only 1 point beats the best point of LaMBO, and the average of LaMBO also looks better, so I'm not fully convinced that the proposed method produces more stable antibodies. Of course, the KD results are quite convincing.\", \"questions\": \"When training the ClonalPLM, are the clonal families (their order) re-mixed (e.g. for the light chain, as it uses many epochs)? If not, how is the ordering chosen?\\n\\nWhat is the edit distance distribution for the in-vitro designs? In the method description (Section 6.4) you state \\\"iteratively optimize F(X) over the top substitution for up to 3 substitutions\\\", does this mean that designs can only have a maximum edit distance of 3 in your experiments? If yes, were the baselines constrained to the same edit distance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Comment 1\", \"comment\": \"Thank you for your thorough and thoughtful review! We address each of your suggestions and questions below. In addition to these points, we\\u2019ve made a number of aesthetic changes to the figures of the paper in the new draft.\\n\\n>This is not a big weakness, but I'd say that the CloneLM model makes more Fv mutations than one would expect. Also, in the appendix the clonal family does not fix the missing starting gap '-' in some cases, which is not ideal as it would always be a 'Q' for a human Ab with the given subsequent amino acids.\\n\\nIndeed, CloneLM reflects the biases of the data it was trained on, OAS! For example, given the sequencing coverage biases in OAS, CloneLM generates sequences without fixing the gaps. We do not anticipate that these substantially affect the performance of CloneBO, and they can potentially be addressed in future work.\\n\\t\\nThe increased Fv mutation rate may be due to (1) the increased rate of Fv mutations in the mature BCRs that CloneLM is trained on, or (2) it may be reverting mutations from germline that are present in the conditional $X_0$; neither of these necessarily represents a pathology. For (1), CloneLM is trained on annotated clonal families from FastBCR, which should exclude naive BCRs that have not undergone somatic hypermutation. Olsen et al., 2024 argue that reflecting the mutational spectrum of mature BCRs is favourable for drug design. For (2), CloneLM may recognize mutations from germline in $X_0$ and generate a clone with many sequences that exclude that germline mutation; in this case, a single \\u201creversion\\u201d may appear as a mutation in every sequence in the clone that is not $X_0$ in our Fig. 2A. 
Since mutations from germline can be passenger mutations or transient, it is not necessarily pathological that CloneLM may revert these mutations in some cases.\\n\\t\\nWhen given a clone with gaps at the beginning, CloneLM may recognize that other sequences in the clone may also have come from a short-read sequencing machine and not fill in the gaps. On the other hand, we have noticed that when given a full sequence during optimization, CloneLM rarely introduces large sequencing errors or pathologies in its generation; since we also restrict sequences we test in lab to a few mutations from $X_0$, it is not obvious that CloneLM reproducing the limited sequencing coverage present in the data should affect the performance of CloneBO.\\n\\t\\nAlternatively, should we want to remove sequencing errors from the training data, we could impute the gaps in coverage in the training data with models such as Olsen et al. 2022; then we could train CloneLM on this data with imputed gaps.\\n\\n>The melting temperature and binding oracle experiment uses VHHs, which I would say is not ideal. While humanized VHHs do look kind of like human VH chains, they are still different, even in trivial ways (e.g. they often have much longer CDR H3). It's nice that the proposed method deals with this distribution shift, but it's not entirely fair to compare to the baselines, which were only developed for 'normal' Abs.\\n\\nIndeed, while ideally we would have validated on optimizing human VHs, we validated on data from experiments on humanized VHHs since this is the data we had available. 
We note, however, that both CloneLM and LaMBO-Ab are trained on \\u201cnormal Abs\\u201d data from human OAS, so CloneBO does not have an unfair advantage from, for example, having seen Camelid VHHs.\\n\\n>In the paper you write \\\"de novo design method DiffAb\\\", but DiffAb is not de novo; it needs a co-crystal structure as a starting point.\\n\\t\\nIn our experiments we use DiffAb to iteratively optimize an antibody by suggesting mutations. Therefore, indeed, we don\\u2019t use it for de novo design, and in the new draft we have removed 3 references to it as such.\\n\\n>Figure 4 (b), second plot: only 1 point beats the best point of LaMBO, and the average of LaMBO also looks better, so I'm not fully convinced that the proposed method produces more stable antibodies. Of course, the KD results are quite convincing.\\n\\nIndeed, while we noticed the most dramatic improvement from the baseline in silico for stability rather than binding, the situation was reversed in vitro. One hypothesis for why this is the case is that our trained oracles didn\\u2019t represent the variance in the in vitro measurements. We tuned our noise parameter $\\sigma$ to generate realistic clones conditioned on the starting pool of the in silico data (note: this procedure is sound as it does not use any test data); we noticed that the tuned value of $\\sigma$ was the same for both binding and stability starting pools and therefore reused this value for all experiments. We hypothesize that re-tuning this parameter on the real in vitro starting pool data may have resulted in better predictions. \\n\\n>When training the ClonalPLM, are the clonal families (their order) re-mixed (e.g. for the light chain, as it uses many epochs)? If not, how is the ordering chosen?\\n\\nYes, the sequences in the clone are shuffled before being fed to CloneLM.\"}",
"{\"summary\": \"In this paper the authors propose CloneBO, a procedure which aims at streamlining antibody sequence optimisation. The approach relies on a language model whose training is heavily inspired by the evolutionary mechanisms present in the immune system and allows for iterative guidance of generation by taking into account previously measured samples. The authors validate their approach both in silico and in vitro, demonstrating significant improvements over other methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper focuses on antibody sequence optimisation, an important problem of multi-objective nature that is encountered in every drug discovery project. Approaches that improve this stage of the drug pipeline have the potential to streamline the development of antibody-based therapies by shortening the development process and decreasing its costs.\\nThe main original contribution of this paper lies in jointly modelling the clonal families (groups of evolutionarily connected antibody sequences that are computationally inferred from large-scale sequencing data). \\nThe authors validate their approach on several benchmarks and showcase the potential of improving individual relevant traits of antibodies like binding affinity, humanness, and thermal stability. Reported results significantly outperform state-of-the-art methods. Improvements are reported both in silico (through better predicted scores of trained oracles) and with in vitro experiments, which significantly strengthens the contribution.\", \"weaknesses\": \"The method requires an initial, viable sequence, i.e. the starting point from which the optimisation begins. This limits the applicability in some design scenarios where such a sequence is not known and therefore hinders the impact of the approach compared to parallel lines of research that focus on, e.g., 
structure-based binder design and optimisation.\", \"questions\": \"Training of CloneLM relies on a data pre-processing step (grouping of sequences into clonal families) done with FastBCR. Since this is a critical initial step, I wonder if the authors examined the effects of different preprocessing tools or hyperparameters on downstream performance, or at least, given the large resource demands of LM training, the changes in the distribution of processed data.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thank you for your rebuttal. I will keep my score.\"}",
"{\"title\": \"Comment\", \"comment\": \"Thank you for your thoughtful and thorough review! We address your points and questions below. In addition to these points, we\\u2019ve made a number of aesthetic changes to the figures of the paper in the new draft.\\n\\n> Method requires initial, viable sequence i.e. the starting point from which the optimisation begins. This limits the applicability in some design scenarios where such sequence is not known and therefore hinders the impact of the approach compared to parallel lines of research that focus on e.g. structure based binder design and optimisation.\\n\\nIndeed CloneBO requires a starting point $X_0$ for optimization. This can be contrasted to other methods which for example take in a structure of a bound antibody and suggest sequences de novo that may adopt that structure.\\n\\nThere have been several recent advances in de novo design, both computational and experimental. But it is almost always the case that these designs are not immediately suitable as drugs either because they have too little activity or are not stable in the human body; to build a drug, these designs must be optimized. We see de novo design methods, computational or experimental, as building $X_0$ and CloneBO as complementing these methods by optimizing their outputs.\\n\\n> Training of CloneLM relies on a data pre-processing step (grouping of sequences into clonal families) done with FastBCR. Since this is a critical initial step I wonder if the authors examined the effects of different preprocessing tools or hyperparameters on downstream performance or at least, given the large resource demands of LM training, the changes in distribution of processed data.\\n\\nIndeed CloneBO is trained on the outputs of FastBCR, which depend on the hyperparameters used for FastBCR. We used the default hyperparameters of FastBCR which were optimized for annotation accuracy on simulated data in their publication. 
We manually inspected select annotated clones to ensure they contained related sequences while different clones contained sufficiently different sequences. Given the computational cost of annotating all of OAS and training a model on this data, as well as the absence of obvious pathologies in the manually inspected clones, we decided to leave this exploration to future work.\"}",
"{\"title\": \"Comment 2\", \"comment\": \"> I\\u2019m unsure why the authors have chosen to model the light chain and heavy-chains separately. A clear explanation would be helpful.\\n\\nRather than build two models to separately generate heavy and light chains, we could have trained a model to (1) generate either heavy or light chains, or (2) generate full antibodies \\u2013 paired heavy and light chains. \\n\\nThere are models that take the approach (1), such as IgLM (Shuai et al., 2023). Since there is a large amount of both heavy and light data however, we didn\\u2019t expect that we would observe a large improvement by training a single model on both sets of data. We also had the resources to train two models in parallel that could each focus on learning the heavy and light chain datasets separately. We suspect IgLM took this approach for the convenience of having both modalities in one model a practitioner could download.\\n\\nFor (2), human antibodies are made up of pairs of heavy and light chains so it is most natural to model pairs. However, due to the limits of current sequencing technology, it is much easier to get all the heavy or light chain sequences in a patient\\u2019s repertoire than identify which sequences pair with which. This is manifest in the availability of the data, where in OAS, only roughly 0.1% of heavy chain sequences have a known paired light chain sequence. Learning from this very limited data while transferring the knowledge gained on the much larger unpaired data requires careful engineering such as the fine tuning approach recently employed by Kenlay et al., 2024. Indeed, this is an interesting direction for future work, especially as the amount of paired data grows.\\n\\n>Line 39: \\u201cup to thousands of previous iterations\\u201d this is somewhat misleading, not all iterations are the same as in many cases these are done in batches (\\u201cmeasurements\\u201d are always done in batches), and batch-iterations rarely exceed 10. 
In silico iterations can reach thousands but the description here is blurry and especially \\u201cmeasurements\\u201d is best reserved for real world queries rather than oracle queries.\\n\\nIndeed, in most cases each iteration involves measuring a large batch and the number of iterations is not in the thousands. In the new draft we\\u2019ve changed the sentence to \\u201cTo make these predictions, we can learn from up to thousands of measurements of sequences from many previous iterations\\u201d.\", \"citations\": \"Shuai, Richard W., Jeffrey A. Ruffolo, and Jeffrey J. Gray. 2023. \\u201cIgLM: Infilling Language Modeling for Antibody Sequence Design.\\u201d Cell Systems 14 (11): 979-989.e4.\\n\\nKenlay, Henry, Fr\\u00e9d\\u00e9ric A. Dreyer, Aleksandr Kovaltsuk, Dom Miketa, Douglas Pires, and Charlotte M. Deane. 2024. \\u201cLarge Scale Paired Antibody Language Models.\\u201d arXiv [q-Bio.BM]. arXiv. http://arxiv.org/abs/2403.17889.\\n\\nWeinstein, Eli N., Alan N. Amin, Jonathan Frazer, and Debora S. Marks. 2022. \\u201cNon-Identifiability and the Blessings of Misspecification in Models of Molecular Fitness and Phylogeny.\\u201d Advances in Neural Information Processing Systems, December.\"}"
]
} |
E3qIInyTgL | CC-VFed: Client Contribution Detects Byzantine Attacks in Vertical Federated Learning | [
"Kento Oonishi",
"Tsunato Nakai"
] | Vertical federated learning (VFL) is a type of federated learning where the collection of different features is shared among multiple clients, and it is attracting attention as a training method that takes into account the privacy and security of training data. On the other hand, in federated learning, there is a threat of Byzantine attacks, where some malicious clients disrupt the training of the model and output a trained model that does not exhibit the behavior that should be obtained. Thus far, numerous defense methods against Byzantine attacks on horizontal federated learning have been proposed, most of which focus on the similarity of the models generated across clients having similar features and mitigate the attacks by excluding outliers. However, in VFL, the feature sets assigned by each client are inherently different, making similar methods inapplicable, and there is little existing research in this area. In light of the above, this paper organizes and classifies feasible Byzantine attacks and proposes a new defense method CC-VFed against these attack methods. Firstly, this paper organizes and classifies attack methods that contaminate training data, demonstrating that sign-flipping attacks pose a threat to VFL. Subsequently, in order to capture the differences in client features, this paper proposes a method for detecting and neutralizing malicious clients based on their contribution to output labels, demonstrating that it is indeed possible to defend against Byzantine attacks in VFL. | [
"Vertical Federated Learning",
"Byzantine Attacks"
] | Reject | https://openreview.net/pdf?id=E3qIInyTgL | https://openreview.net/forum?id=E3qIInyTgL | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zC8SLG7DZO",
"z0mAsIrRKK",
"yydnT3RsKY",
"xeFTOZr6Bf",
"spq2HX2q8V",
"sDghxBq5Ki",
"rE8XJyjokD",
"gmOv36dLaA",
"bxp4ldbHnV",
"aYpoTbrEs0",
"Y0K95we3H3",
"R4QRsxtjuR",
"OgAzSD3U9k",
"Kpnwvy3Jlf",
"JG5zq73SQV",
"Izi9rwqkuB",
"FYbamaXtDd",
"FUX247yEEG",
"CynAiOz3XD",
"CXRcmjwZG0",
"BCZDHUXbED",
"6pVdppO72d",
"0s7wGN96xD",
"01RAKvYW2U"
],
"note_type": [
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732351539108,
1737523989607,
1730208969963,
1732086379718,
1732088577180,
1730275281661,
1732088222785,
1732087585897,
1732088444005,
1732087187710,
1732087392572,
1730506864156,
1732086969380,
1734851358076,
1732666733731,
1732658737517,
1732090067650,
1733099343498,
1732658640864,
1732087485758,
1732088008301,
1732608826604,
1732854220419,
1732087891002
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission9538/Reviewer_65wx"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission9538/Reviewer_65wx"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Reviewer_VPkd"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Reviewer_t2tb"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Area_Chair_NVJP"
],
[
"ICLR.cc/2025/Conference/Submission9538/Reviewer_t2tb"
],
[
"ICLR.cc/2025/Conference/Submission9538/Area_Chair_NVJP"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Reviewer_65wx"
],
[
"ICLR.cc/2025/Conference/Submission9538/Area_Chair_NVJP"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
],
[
"ICLR.cc/2025/Conference/Submission9538/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for your reply.\\n\\nFrom your reply, I guess the selection of the thresholds for malicious client detection is vital for balancing between Top-1 accuracy and detection rate, which is very hard to decide when no information about the attacker is known. This raises my concern about the real-world applicability of your defense mechanism. Could you provide any threshold selection criteria to effectively select a proper threshold that fits the potential attack?\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"In this paper, the author proposed CC-VFed to detect and neutralize malicious clients in the setting of vertical federated learning. In general, the malicious clients are recognized based on how much each contributes to misleading the VFL model into making wrong predictions. Experiments are done on 2 datasets with a limited setting in which one and only one party is malicious. The effectiveness of the proposed method is verified for defending against sign-flipping byzantine attacks for VFL.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"To verify the effectiveness of the proposed defense method, a strong byzantine attack for VFL, the sign-flipping attack, is first proposed and demonstrated to be quite effective in harming the performance of a VFL system. Also, studies on which model architectures are more robust to attacks and help in boosting the defense capability are also conducted, which is quite interesting.\", \"weaknesses\": \"1. Why should $c$ be close to one? Since this argument lacks analysis or supporting work reference, it sounds really weird.\\n2. Experiments are not adequate, only 2 datasets are used. Besides, the setting that only one party is malicious makes the experiment and the detection of the malicious party relatively easy, which could not demonstrate the effectiveness of the proposed malicious party recognition method. To what extent this defense can safeguard VFL from Byzantine attacks when the number of malicious parties is unknown is not demonstrated, which is a very important setting that is close to the real-world setting.\\n3. I doubt the effectiveness and design of the proposed method, since when the defense is applied to defend against random/permutation attacks, the Top-1 accuracy often drops compared to the setting without defense. 
So, the defense acts like a \\u201cstronger attack\\u201d, which is not acceptable.\", \"questions\": \"If the authors could explain clearly why the problem I mentioned in Weakness 3 occurs and why the proposed defense is not \\\"a stronger attack\\\" itself, I will consider raising my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer t2tb (Weaknesses 1)\", \"comment\": \"Thanks for your valuable feedback.\\n\\n>The investigated Byzantine attack methods, particularly sign-flipping [1], have been extensively studied in previous research. The authors could have enhanced the scope by incorporating other relevant adversarial attacks in VFL [2,3].\\n\\nYou have raised an important point; however, we believe that these studies [1,2,3] would be outside the scope of our paper because these studies target the inference phase rather than the training phase. This paper targets Byzantine attacks on the training phase. \\nAttacks that aim to mislead the inference results during the inference phase are indeed highly significant. However, in such attacks, the model itself has been correctly trained, so there is no need to incur substantial costs for retraining the model. In contrast, when the model itself collapses, as discussed in this paper, retraining requires significant costs, making it a more threatening attack method. Furthermore, countermeasures during the training phase are more challenging compared to those during the inference phase, as they cannot leverage the training of defense mechanisms during the training phase, as was done in the countermeasures for the inference phase [1]. Therefore, while the ultimate goal of the attacks\\u2014to mislead the detection results\\u2014remains the same, the processes involved are entirely different. In the revised version, we have clarified the above discussions.\\n\\n>Additionally, the gradient contribution-based defense mechanism bears significant similarities to existing approaches [4].\\n\\nThis is a valid assessment of similarities to existing approaches [4]; however, we believe that the contributions addressed in [4] and those in this paper are different. The contribution in [4] calculates the extent to which each client's input has advanced the overall model training. 
In contrast, the contribution in this paper calculates the amount of information each client has provided to the output labels. While it is true that both approaches use gradients to compute some form of contribution, the targets of these computations are different. Specifically, the method proposed in this paper emphasizes the importance of evaluating changes in the output labels. The progression of model training, as indicated in literature [4], is not considered sufficient to evaluate whether an attack has occurred.\"}",
"{\"title\": \"Response to Reviewer 65wx (Weaknesses 3)\", \"comment\": \">I doubt the effectiveness and design of the proposed method, since when the defense is applied to defend against random/permutation attacks, the Top-1 accuracy often drops compared to the setting without defense. So, the defense acts like a \\u201cstronger attack\\u201d, which is not acceptable.\\n\\nWe agree with your assessment. In this study, while defending against sign-flipping attacks resulted in an increase in Top-1 accuracy, it is indeed possible that the Top-1 accuracy for random/permutation attacks may decrease. This can be attributed to the fact that the detection of malicious clients may occasionally be inaccurate, leading to the omission of legitimate information that is crucial for training. Specifically, in the case of random/permutation attacks, the attacks often fail to the extent that, as shown in Table 1, a more accurate model may be generated compared to the scenario where no attack is conducted. During the training process of these models, the proposed defense method in this study may inadvertently omit additional legitimate information, potentially causing a decrease in Top-1 accuracy.\\nThe defense method proposed in this study prioritizes algorithmic efficiency and practicality compared to the existing research by Yuan et al. However, given that the determination is made based on numerical thresholds, it is challenging to completely eliminate judgment errors, and such errors can introduce noise, slightly reducing accuracy. From this perspective, the defense method proposed in this study is challenging yet noteworthy in that it enhances accuracy through the defense mechanism while minimizing the accuracy degradation caused by judgment errors. We consider it acceptable that the Top-1 accuracy dramatically recovers in the presence of highly effective attacks, even if the detection rate decreases by a few percentage points.\"}",
"{\"summary\": \"This article organizes and classifies Byzantine attacks, and proposes a new defense method CC-VFed for these attack methods. Firstly, it has been proven that sign-flipping attacks pose a threat to VFL. Subsequently, in order to capture the differences in client features, this paper proposes a method for detecting and neutralizing malicious clients based on their contribution to output labels, proving that it is indeed possible to defend against Byzantine attacks in VFL.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The article proposes a new defense method CC-VFed, which utilizes the contribution of each client to the output label to counter Byzantine attacks.\\n2. The article has verified through extensive experiments that sign-flipping attacks are threatening.\\n3. In the background section, the author delves into the research question layer by layer. The outlier detection methods used in HFL scenarios are not sufficient for VFL, and there are deficiencies in the existing methods in VFL. This presents the challenge of this article.\\n4. This study used real-world datasets such as BCW and CIFAR10 to evaluate the effectiveness of defense methods. And the article studied the differences in using this method on different datasets.\", \"weaknesses\": \"1. Although the method is easy to understand, the author can better discuss the novelty of the proposed mechanism.\\n 2. The author should conduct a more detailed analysis of the method, identify its limitations, and make improvements.\\n 3. What are the unique advantages of this method compared to existing methods in other VFLs?\\n 4. The method lacks some key descriptions, making it difficult for people to understand.\", \"questions\": \"1. Novelty: The novelty of the proposed method is unclear from the paper. For example, the core of the CC-VFed method mentioned in the contribution lies in the calculation of contribution degree. 
The author's description of two targeted improvements (Selvaraju et al., 2017) did not clearly state the effectiveness and necessity of the author's improvements. I think this needs to be strengthened. The method proposed by the author does have some practicality, but the method itself is too simple and cannot solve some key problems. For example, who will determine whether the output label matches the label of the training data? Did the author not explain this crucial issue in the methodology? If the client is to perform this critical operation, how can we prevent the client from doing evil and disrupting the correctness of the match? On the contrary, if the server executes it, then the privacy information of the data label is known by the server. How to ensure the user privacy of federated learning? These are the issues that the author needs to focus on. But I greatly appreciate the author's research on the challenges presented in this article, such as determining the most severe challenge through detailed experiments - the sign-flipping attacks.\\n 2. Related work: What are the unique advantages of this method compared to existing methods in other VFLs? The author lacks a description and comparison of the shortcomings in other VFL methods, and does not highlight the unique advantages of the method proposed in this article, which can solve problems that cannot be solved by other methods. I think the author should add some state-of-the-art comparative literature for detailed comparison.\\n 3. Method: In addition to the security issues of the proposed method, some steps in the method are also confusing. For example, who will determine if the output label matches the label of the training data? The calculation formula for contribution degree lacks more description. The four defense methods proposed are actually four different situations within one defense method, which are not the four defense methods claimed by the author.\\n 4. 
Evaluation: Although the author conducted extensive experiments from various perspectives, their method did not compare with the most advanced existing methods. In addition, the author did not provide clear explanations in the experimental setup. For example, what is the experimental environment? How many experiments did the author conduct? Is the difference in results statistically significant?\\n 5. Organization and Writing: It is recommended that the author add some graphics to describe the proposed method, so that readers can clearly understand: which entities are there? What operations did each entity perform? What are the unique meanings behind each operation? In addition, the key standard of \\\"Top-1 accuracy\\\" has not been explained.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer 65wx (Weaknesses 1)\", \"comment\": \"Thanks for your valuable feedback.\\n\\n>Why should $c$ be close to one? Since this argument lacks analysis or supporting work reference, it sounds really weird.\\n\\nWe agree that this argument lacks analysis or supporting work reference. The discussion regarding the value of $c$ is based on the input validation defined in this paper. During input validation, the following conditions are assumed for excluding inputs:\\n- When the input values do not meet the specified upper and lower bounds.\\n- When the input values significantly deviate from the expected distribution.\\n\\nIf $c$ is significantly increased or decreased, the inputs will be excluded for the following reasons:\\n- If $c$ is significantly large: The input values will not meet the specified upper and lower bounds.\\n- If $c$ is significantly small: The values constituting each element will lack variance, making it easy to detect deviations from the original distribution.\\n\\nThus, the inputs will be excluded based on these criteria. If the value of $c$ is close to 1, it is expected that the input will not be excluded during input validation, at least in terms of the distribution of values remaining unchanged. However, the method for setting the threshold for this input validation and the discussion regarding the permissible values of $c$ remain as topics for future research.\"}",
"{\"title\": \"Response to Reviewer VPkd (Questions 3)\", \"comment\": \">In addition to the security issues of the proposed method, some steps in the method are also confusing. For example, who will determine if the output label matches the label of the training data? The calculation formula for contribution degree lacks more description.\\n\\nWe agree that some steps in the method are also confusing. The central server is responsible for making determinations regarding the output labels. Additionally, in the revised version, we have provided a more detailed description of the method for calculating contributions.\\n\\n>The four defense methods proposed are actually four different situations within one defense method, which are not the four defense methods claimed by the author.\\n\\nWe agree with your assessment. Indeed, the proposed method in this study primarily utilizes contributions to detect malicious clients as a strategic approach. However, in practice, we propose four distinct methods for detecting malicious clients, recognizing that the effectiveness of each method may vary depending on the dataset. Therefore, this paper details these four methods.\"}",
"{\"title\": \"Response to Reviewer 65wx (Weaknesses 2)\", \"comment\": \">Experiments are not adequate, only 2 datasets are used.\\n\\nYou have raised an important question. In this paper, we conduct experiments using the CIFAR10 image dataset and the BCW numerical dataset to verify the effectiveness of our approach on different types of data. Furthermore, the paper by Murata et al.[1], which has been accepted for ICLR 2024 and discusses defenses against Byzantine attacks in horizontal federated learning, also validates their defense methods using two image datasets, CIFAR10 and MNIST. Therefore, we consider the datasets used in our study to be sufficient.\\n\\n[1] Tomoya Murata, Kenta Niwa, Takumi Fukami, and Iifan Tyou. Simple minimax optimal byzantine robust algorithm for nonconvex objectives with uniform gradient heterogeneity. In The Twelfth International Conference on Learning Representations, 2024.\\n\\n>Besides, the setting that only one party is malicious makes the experiment and the detection of malicious party relatively easy, which could not demonstrate the effectiveness of the proposed malicious party recognition method. To what extent can this defense safeguard VFL from Byzantine attack when the number of malicious parties is unknown is not demonstrated which is a very important setting that close to real world setting.\\n\\nYou have raised an important point. In the experiments conducted in this study, it is true that we limited the number of malicious clients to at most one. While it is indeed correct that the actual number of malicious clients is unknown and an algorithm should be constructed to accommodate this, it remains challenging to detect malicious clients with complete accuracy based on numerical calculations. Therefore, as described in Section 4.1.2, we assume that when the number of clients increases, the algorithm will detect up to half of the malicious clients. 
Given that we have successfully mitigated the impact of a single malicious client, it is anticipated that similar effectiveness can be achieved even if the number of malicious clients increases.\"}",
"{\"title\": \"Response to Reviewer t2tb (Weaknesses 3)\", \"comment\": \">The experimental evaluation could be more comprehensive. The current validation relies on two relatively simple datasets (BCW and CIFAR-10), which may not fully demonstrate the method's robustness across diverse scenarios. The absence of ablation studies limits our understanding of each component's contribution to the overall system performance.\\n\\nYou have raised an important question. In this paper, we conduct experiments using the CIFAR10 image dataset and the BCW numerical dataset to verify the effectiveness of our approach on different types of data. Furthermore, the paper by Murata et al.[5], which has been accepted for ICLR 2024 and discusses defenses against Byzantine attacks in horizontal federated learning, also validates their defense methods using two image datasets, CIFAR10 and MNIST. Therefore, we consider the datasets used in our study to be sufficient.\\n\\n[5] Tomoya Murata, Kenta Niwa, Takumi Fukami, and Iifan Tyou. Simple minimax optimal byzantine robust algorithm for nonconvex objectives with uniform gradient heterogeneity. In The Twelfth International Conference on Learning Representations, 2024.\"}",
"{\"title\": \"Response to Reviewer VPkd (Questions 1)\", \"comment\": \"Thanks for your valuable feedback.\\n\\n>The novelty of the proposed method is unclear from the paper. For example, the core of the CC-VFed method mentioned in the contribution lies in the calculation of contribution degree. The author's description of two targeted improvements (Selvaraju et al., 2017) did not clearly state the effectiveness and necessity of the author's improvements. I think this needs to be strengthened.\\n\\nThank you for your suggestion. In this study, we investigate the feasibility of Byzantine attacks and propose new defense methods CC-VFed against these attack methods. In particular, the novelty of this paper lies in the proposal of a new defense method against Byzantine attacks during the training phase. While Yuan et al. had previously proposed a defense method against Byzantine attacks during training, their method is only applicable to simple models, because in complex models, solving the dual problem utilized in their method may become challenging. In this study, we propose CC-VFed as a more practical defense method against Byzantine attacks during the training phase, which is applicable to diverse models and datasets. In CC-VFed, if the inference results at the central server do not match the training data, it is interpreted that a Byzantine attack has been conducted using the input data. Specifically, clients that significantly contribute to incorrect inference results are altering the inference outcomes, and thus, can be regarded as malicious clients. When detecting malicious clients, only simple computations similar to Grad-CAM are required, making this approach more practical compared to the method proposed by Yuan et al. 
In the revised version, we have clarified the novelty and effectiveness of our approach through this discussion.\\n\\n>The method proposed by the author does have some practicality, but the method itself is too simple and cannot solve some key problems. For example, who will determine whether the output label matches the label of the training data? Did the author not explain this crucial issue in the methodology? If the client is to perform this critical operation, how can we prevent the client from doing evil and disrupting the correctness of the match? On the contrary, if the server executes it, then the privacy information of the data label is known by the server. How to ensure the user privacy of federated learning? These are the issues that the author needs to focus on. But I greatly appreciate the author's research on the challenges presented in this article, such as determining the most severe challenge through detailed experiments - the sign-flipping attacks.\\n\\nYou have raised an important question. Generally, in VFL, the central server aims to output labels for a given target using data from external entities. Therefore, during the training phase, the central server retains the labels of the training data. Based on the above considerations, the comparison between the output labels and the labels of the training data is conducted by the central server.\"}",
"{\"summary\": \"This paper compares three Byzantine attack methods in Vertical Federated Learning, demonstrating that the sign-flipping attack exhibits the highest effectiveness. Based on the attack, they propose a new defense method CC-VFed to identify the malicious client. The proposed method's efficacy is evaluated through experiments on the BCW and CIFAR-10 datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Developing Byzantine attacks in the VFL context is an important topic, and has received limited attention so far.\", \"weaknesses\": \"1. Limited novelty. The investigated Byzantine attack methods, particularly sign-flipping [1], have been extensively studied in previous research. The authors could have enhanced the scope by incorporating other relevant adversarial attacks in VFL [2,3]. Additionally, the gradient contribution-based defense mechanism bears significant similarities to existing approaches [4].\\n\\n2. The proposed defense methodology raises concerns regarding VFL protocol compliance and privacy preservation. Specifically, Step 1 of the defense mechanism requires clients to share training data $x_i$ with the central server, which compromises data privacy. Furthermore, the aggregation process of Grad-cam values needs clarification, particularly regarding the distinction between multiple nodes versus single node scenarios. The defense computation would benefit from more rigorous mathematical formulation.\\n\\n3. The experimental evaluation could be more comprehensive. The current validation relies on two relatively simple datasets (BCW and CIFAR-10), which may not fully demonstrate the method's robustness across diverse scenarios. The absence of ablation studies limits our understanding of each component's contribution to the overall system performance.\\n\\n[1] Liu, Jing, et al. 
\\\"CoPur: certifiably robust collaborative inference via feature purification.\\\" Advances in Neural Information Processing Systems 35 (2022): 26645-26657.\\n[2] Pang, Qi, et al. \\\"ADI: Adversarial Dominating Inputs in Vertical Federated Learning Systems.\\\" arXiv preprint arXiv:2201.02775 (2022).\\n[3] Duanyi, Y. A. O., et al. \\\"Constructing Adversarial Examples for Vertical Federated Learning: Optimal Client Corruption through Multi-Armed Bandit.\\\" The Twelfth International Conference on Learning Representations. 2023.\\n[4] J. Wang, L. Zhang, A. Li, X. You, and H. Cheng, \\u201cEfficient participant contribution evaluation for horizontal and vertical federated learning,\\u201d in 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 2022, pp. 911\\u2013923.\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer t2tb (Weaknesses 2)\", \"comment\": \">The proposed defense methodology raises concerns regarding VFL protocol compliance and privacy preservation. Specifically, Step 1 of the defense mechanism requires clients to share training data $x_{i}$ with the central server, which compromises data privacy.\\n\\nWe agree that the current version implies that client data is sent to the central server. This statement contains a typographical error; in reality, what is sent to the central server is not $x_{i}$ but $z_{i,j}$. In the revised version, this part has been corrected accordingly.\\n\\n>Furthermore, the aggregation process of Grad-cam values needs clarification, particularly regarding the distinction between multiple nodes versus single node scenarios. The defense computation would benefit from more rigorous mathematical formulation.\\n\\nWe agree that the aggregation process of Grad-cam values needs clarification. Section 4.1.1 is structured to first present the formula for calculating each client's contribution, followed by a detailed explanation of the computation. Specifically, to calculate the contribution of a client composed of multiple nodes, the contributions of individual nodes are summed. The contribution of each node $k$ is calculated as the product of the following:\\n- The input value $z_{i,j,k}$ to the server\\n- The gradient of $z_{i,j,k}$ with respect to the output label\\n\\nTherefore, in the paper, this is expressed as the dot product of the following:\\n- The input vector $z_{i,j}$ to the server\\n- The gradient of $z_{i,j}$ with respect to the output label\\n\\nAs stated initially, the calculation results are presented at the beginning of Section 4.1.1. In the revised version, we have made it clearer that these values represent the calculation results.\"}",
"{\"metareview\": \"Strength: the paper presented a defense method against Byzantine attacks in VFL.\", \"weakness\": \"1. All reviewers agreed that the paper has limited novelty. Reviewer t2tb pointed out several pieces of prior research that were not adequately discussed, a point also raised by the other two reviewers.\\n2. Experiments were not clearly set up and not very convincing.\", \"additional_comments_on_reviewer_discussion\": \"The two reviewers with lower scores actively participated in the discussions; however, neither was convinced by the authors' rebuttal. The reviewer with the highest score (6) didn't participate, and that reviewer's support for this paper is not strong.\"}",
"{\"comment\": \"Thank you for your thorough revisions and clarifications.\\n\\nAfter careful consideration, I still have some concerns regarding the incremental nature of the work, since most pages are discussing the previous sign-flip attacks. I believe the core technical contribution could be strengthened, particularly in the algorithmic framework. The current approach, which primarily relies on gradients and labels, would benefit from a more rigorous theoretical analysis similar to that presented in your mentioned work [1]. Additionally, given the current theoretical framework, expanding the empirical validation would significantly strengthen the paper's contributions. While I appreciate authors' efforts in the revision, I maintain my previous assessment, as these aspects need further attention to fully demonstrate the paper's novelty and contribution to the field. \\n\\n[1] Tomoya Murata, Kenta Niwa, Takumi Fukami, Iifan Tyou, \\\"Simple Minimax Optimal Byzantine Robust Algorithm for Nonconvex Objectives with Uniform Gradient Heterogeneity\\\"\"}",
"{\"comment\": \"Dear reviewer VPkd,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"title\": \"Update Manuscript\", \"comment\": [\"Thank you for your meaningful comments. Based on your feedback, we have made the following revisions:\", \"Clarified that this study targets the training phase for attacks.\", \"Explicitly highlighted the novelty of this research.\", \"Clearly compared our work with existing studies, particularly the work by Yuan et al.\", \"Added details regarding the experimental conditions.\", \"Corrected any typographical errors.\", \"Included illustrative diagrams of the defense mechanism in the Appendix A.\"]}",
"{\"comment\": \"As the hyper-parameter decision method is not well established and experimental results do reveal the importance of a proper hyper-parameter selection (if not properly selected, the defense performs like \\\"a stronger attack\\\" as I previously mentioned in my above comments), I have decided to keep my score. Besides, I agree with reviewer t2tb that the novelty is limited.\"}",
"{\"comment\": \"Dear reviewer t2tb,\\n\\nCould you please respond to authors' rebuttal and see if you would like to update your review? Thanks very much!\\n\\nAC\"}",
"{\"title\": \"Response to Reviewer VPkd (Questions 2)\", \"comment\": \">What are the unique advantages of this method compared to existing methods in other VFLs? The author lacks a description and comparison of the shortcomings in other VFL methods, and does not highlight the unique advantages of the method proposed in this article, which can solve problems that cannot be solved by other methods. I think the author should add some state-of-the-art comparative literature for detailed comparison.\\n\\nYou have raised an important point. A distinctive feature of this study is its resilience to Byzantine attacks during the training phase. To the best of our knowledge, there are only two existing studies that discuss methods to defend against Byzantine attacks by clients during training: the methods proposed by Yuan et al. and Xu et al. Specifically, in vertical federated learning, the features of each client differ significantly compared to horizontal federated learning, making it difficult to detect Byzantine attacks during the training phase. This presents a particularly challenging problem. The method by Xu et al. involves communication between clients, which falls outside the scope of this study. Therefore, in this paper, we have focused on and discussed the method by Yuan et al. as a comparative benchmark. Specifically, we propose CC-VFed as a more practical defense method against Byzantine attacks during the training phase, which is applicable to diverse models and datasets. In the revised version, we have added explanations to clearly elucidate the advantages of CC-VFed.\"}",
"{\"title\": \"Response to Reviewer VPkd (Questions 5)\", \"comment\": \">It is recommended that the author add some graphics to describe the proposed method, so that readers can clearly understand: which entities are there? What operations did each entity perform? What are the unique meanings behind each operation?\\n\\nThank you for your suggestion. In the revised version, we have included figures in the Appendix A.\\n\\n>In addition, the key standard of \\\"Top-1 accuracy\\\" has not been explained.\\n\\nWe agree with your assessment. In the revised version, we have added an explanation of Top-1 Accuracy to Section 3.3.\"}",
"{\"title\": \"Response to Reviewer 65wx\", \"comment\": \"This is an interesting perspective. In this algorithm, two thresholds are utilized as internal parameters. Specifically, these are the threshold related to contributions, which is set to determine whether a client is malicious (as discussed in Section 4.1.2, whether to set $t=0$), and the pre-set number of malicious clients.\\n\\nRegarding the former threshold, which determines whether a client is malicious based on their contribution, experimental results suggest that it can be set based on the task, independent of the attack. Since the task is defined by the central server, setting a task-specific threshold (e.g., not setting $t$ for image tasks or setting $t=0$ for numerical tasks) can establish a training method capable of defending against general Byzantine attacks.\\n\\nAs for the latter threshold concerning the number of malicious clients, it cannot be known in advance. Therefore, as discussed in Section 4.1.2 of this paper, it is set provisionally.\"}",
"{\"title\": \"Response to Reviewer t2tb\", \"comment\": \"Thank you for your comments.\\n\\n>After careful consideration, I still have some concerns regarding the incremental nature of the work, since most pages are discussing the previous sign-flip attacks. I believe the core technical contribution could be strengthened, particularly in the algorithmic framework. \\n\\nYou have raised an important point; however, the discussion on the sign-flipping attack in this context aims to evaluate the effectiveness of the attack under newly imposed constraints that were not previously considered for the attacker. In this study, we have clarified that, unlike traditional scenarios, the attacker cannot freely alter the gradients when a simplified defense mechanism, different from the one proposed in this paper, is introduced. This clarification leads to a new consideration of attacks where the attacker can only manipulate the input. Therefore, in order to delineate the capabilities of the attacker, the discussion and performance evaluation of the sign-flipping attack are crucial.\\n\\n>The current approach, which primarily relies on gradients and labels, would benefit from a more rigorous theoretical analysis similar to that presented in your mentioned work [1]. Additionally, given the current theoretical framework, expanding the empirical validation would significantly strengthen the paper's contributions. While I appreciate authors' efforts in the revision, I maintain my previous assessment, as these aspects need further attention to fully demonstrate the paper's novelty and contribution to the field.\\n[1] Tomoya Murata, Kenta Niwa, Takumi Fukami, Iifan Tyou, \\\"Simple Minimax Optimal Byzantine Robust Algorithm for Nonconvex Objectives with Uniform Gradient Heterogeneity\\\"\\n\\nThank you for providing these insights. Indeed, it would be ideal to conduct a theoretical analysis of the algorithm proposed in this paper using a similar approach to [1]. 
In [1], the analysis of HFL leverages the high similarity of models transmitted from each client. However, in VFL, the similarity of inputs from each client is low, and the evaluation of malicious clients is based on the continuously evolving model itself due to the learning process. Therefore, conducting a similar analysis as in [1] is considered challenging. In particular, when performing the analysis, it is crucial to appropriately define assumptions regarding the attacker and the generated model. The organization of the attacker's assumptions conducted in this study is expected to significantly aid in setting these assumptions in future research.\"}",
"{\"title\": \"Response to Reviewer VPkd (Questions 4)\", \"comment\": \">Although the author conducted extensive experiments from various perspectives, their method did not compare with the most advanced existing methods.\\n\\nYou have raised an important point; however, we believe that there are no direct comparators for CC-VFed. Firstly, while defense methods for horizontal federated learning have been proposed, these methods are not directly applicable to VFL. Furthermore, the method proposed by Yuan et al., which is a defense method for VFL, cannot be directly applied to the datasets and models used in our experiments, making comparisons difficult. Specifically, preventing Byzantine attacks during training using practical methods is particularly challenging and has not been addressed in existing research. Therefore, we consider that there are no direct comparators for CC-VFed.\\n\\n>In addition, the author did not provide clear explanations in the experimental setup. For example, what is the experimental environment?\\n\\nWe use Ubuntu 20.04, 32GB of memory, and two GPUs (NVIDIA RTX A5000), with CUDA 11.6 and PyTorch 1.13.1 built for CUDA 11.6.\\n\\n>How many experiments did the author conduct? Is the difference in results statistically significant?\\n\\nYou have raised an important question. In this study, we conducted 440 experimental patterns. Initially, for the case of 2 clients, we performed 180 experimental patterns. Specifically, as shown in Table 1 of the paper, there are 9 attack patterns, including the case where no attack is conducted. For each of these patterns, we experimented with 5 defense methods (including the case where no defense is applied), 2 datasets, and 2 activation functions, resulting in a total of 9 * 5 * 2 * 2 = 180 experimental patterns. Similarly, for the case of 3 clients, as shown in Table 5 of the paper, there are 13 attack patterns, including the case where no attack is conducted, leading to 260 experimental patterns. 
Summing these, we conducted a total of 440 experimental patterns. Notably, in cases where the defense is successful, the defense method succeeds against all attack patterns, thereby demonstrating the effectiveness of the defense.\"}"
]
} |
E3PgLQzPob | CSGO: Content-Style Composition in Text-to-Image Generation | [
"Peng Xing",
"Haofan Wang",
"Yanpeng Sun",
"wangqixun",
"Baixu",
"Hao Ai",
"Jen-Yuan Huang",
"Zechao Li"
] | The diffusion model has shown exceptional capabilities in controlled image generation, which has further fueled interest in image style transfer. Existing works mainly focus on training-free methods (e.g., image inversion) due to the scarcity of specific data. In this study, we present a data construction pipeline for content-style-stylized image triplets that generates and automatically cleanses stylized triplets. Based on this pipeline, we construct a dataset IMAGStyle, the first large-scale style transfer dataset containing 210k image triplets, available for the community to explore and research. Equipped with IMAGStyle, we propose a simple yet effective framework CSGO, a style transfer model based on end-to-end training, which explicitly decouples content and style features employing independent feature injection. Our CSGO implements image-driven style transfer, text-driven stylized synthesis, and text editing-driven stylized synthesis in the same model.
We conduct extensive experiments on CSGO to validate the effectiveness of synthetic stylized data for style control. Meanwhile, ablation experiments show the effectiveness of CSGO. | [
"image generation",
"style transfer",
"stylized synthesis"
] | Reject | https://openreview.net/pdf?id=E3PgLQzPob | https://openreview.net/forum?id=E3PgLQzPob | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yfD5MzSO4l",
"yPSrwJfIfn",
"vvF1rnXHvP",
"uyghjV36M4",
"tVrT2NGQhf",
"s0oZAzDxfW",
"peVBp3ltKs",
"pNddO9fLuP",
"nm6NVmvO3a",
"n8Vm0b3gut",
"jsZ4zvadND",
"h1VqZdt67m",
"fsBZLMJbLV",
"dNO4dayrFw",
"badfEkpe0h",
"axdSPWAsc6",
"aebUeEzDed",
"ZYtsimYvNN",
"Ta3rVIFfiL",
"TZcpY3mEYD",
"SpsLzZgdXa",
"SjfDhWpDzg",
"Ocosx5XYuE",
"KD0V1V45XJ",
"IoLh6KM7g6",
"H7SErHqkye",
"Cmxt3YBKhh",
"C7XD4gUDA7",
"BmNLS75gmH",
"AvaUacjD1M",
"7r5G3N1SY3",
"7qDhI5taBq",
"7ZIUWoKshU",
"6my64cUrNl",
"2JyVEXZFfJ"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1732262031157,
1732360648234,
1737523388714,
1732687509731,
1732171719834,
1732765984579,
1730474573475,
1732360656306,
1732171737957,
1732590814899,
1732171471552,
1732240848986,
1732171346942,
1732688500260,
1730695687106,
1732360604030,
1732171572719,
1732585313938,
1730201886843,
1732360646651,
1732690168035,
1732765828356,
1732711325770,
1732685937046,
1734611861483,
1730632042654,
1732171789190,
1732765914919,
1732285547062,
1732765864220,
1732360653229,
1732171389581,
1732171439641,
1732171552447,
1730728799683
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_qQFs"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_y9qW"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_qQFs"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_Vjbz"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_y9qW"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_qQFs"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_qQFs"
],
[
"ICLR.cc/2025/Conference/Submission283/Area_Chair_npo9"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_zBGr"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Authors"
],
[
"ICLR.cc/2025/Conference/Submission283/Reviewer_rbd7"
]
],
"structured_content_str": [
"{\"title\": \"Clarification of the question\", \"comment\": \"$\\\\textbf{Q1: Dataset filtering process}$\\\\\\\\\\nI understand that the filtering step selects the best samples among 50 generations using the CAS metric. I am curious whether the CAS metric can truly filter out unreasonable generations that show diverse types of style leakage, \\\\textit{including texture, color, pose, size, and background.} The response and Figure 1 do not fully resolve my concern about whether CAS is a robust metric for filtering out content information such as pose, size, and background coming from the style image. I suggest that the authors include an extensive list of visual examples of filtering cases without cherry-picking.\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. We have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns.\\n\\nPlease let us know if you have any remaining questions or require additional clarification. We value your feedback and are eager to ensure our work meets the highest standards.\\n\\nThank you again for your thoughtful insights and guidance.\\n\\nBest regards,\\nCSGO Authors\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"title\": \"Response to Reviewer qQFs\", \"comment\": \"Dear Reviewer qQFs,\\n\\n\\nWe have placed **Figures 18-22 at the end of the Appendix**, showing the content image, the style image, and a large number of raw images generated by B-LoRA, respectively. In the bottom left corner of each figure we show the \\u201ctarget image\\u201d, which represents the best result obtained by the CAS metric. In addition, we have set a threshold filter, i.e., an image with a high CAS score, which indicates a high level of content loss, will be filtered out directly.\\n\\nWe are very sorry for the misunderstanding caused by the formatting error. We have corrected the citation format and re-uploaded the supplementary material. The examples are placed at the end of the supplementary material (Figures 18-22).\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"title\": \"Official Comment by Authors to Reviewer y9qW(2/2)\", \"comment\": \"**Q4: user study**\", \"a\": \"The design of CSGO's Content Block and Style Block is informed solely by experimental results and the conclusions drawn from InstantStyle. According to InstantStyle, the Style Block is located in the up_blocks.0.attentions.1 layer, although it relies on the original weights of IPAdapter. Consequently, we first validated the effectiveness of up_blocks.0.attentions.1 for style control. Subsequently, we incrementally added additional layers to evaluate the extent of the style transfer capability.\\n\\nThe experimental results are presented in the table below. It is important to note that these results represent early experimental validation and do not involve the use of ControlNet to regulate content. Instead, only Content Blocks and cross-attention layers were utilized to manage content. The Content Block is applied to all blocks except those designated as Style Blocks.\\n\\n\\n|style block|CSD|\\n|:-----------:|:-----------:|\\n|up_blocks.0.attentions.1|0.5239|\\n|up_blocks.0.attentions.1 & up_blocks.0.attentions.2|0.5527|\\n|up_blocks.0.attentions.1 & up_blocks.0.attentions.2 & up_blocks.1|0.5743|\\n|up_blocks|0.5864|\\n|up_blocks & mid_blocks|0.5702|\\n\\n\\nThe experimental results show that higher CSD scores can be obtained when setting up_blocks as style blocks. In addition, we also investigated the setting of overlapping content blocks and style blocks. However, we found that this may cause severe conflicts and significantly reduce the style transfer capability. Therefore, we employ decoupled controls.\\n\\n---\\n\\n**If our answers are more in line with your expectations, we kindly invite you to reconsider your initial rating.**\", \"setting\": \"we randomly select 100 sets of results from the test set. Of these, 20 groups are portraits and 20 groups are sketches; the rest are randomized. 
Subsequently, a user research experiment was conducted to compare CSGO with StyleShot-lineart, InstantStyle, and StyleAligned, respectively. Each group contains four generated results, and the user selects the best result based on transfer quality.\\n\\n\\n|VS| CSGO win | Tie | CSGO loss | \\n|:----------------:|:----------------:|:-----------:|:-----------:|\\n|StyleShot| 58.5% |21.4%|20.1%|\\n|InstantStyle| 64.2% |20.6%|15.4%|\\n|StyleAligned| 67.0% |12.3%|10.7%|\\n\\n\\n--- \\n\\n**Q5: Figure 1 serves as the first illustration, and it should be clearly introduced, including the input image, output image, and comparison images, providing visual guidance to facilitate better understanding. This should also include the Figure 1(3) part.**\"}",
"{\"comment\": \"Dear Reviewer qQFs:\\n\\nAs today is the last day to revise the manuscript, I wanted to kindly follow up regarding the concerns you raised earlier. We have already provided detailed responses to address your feedback, but we have not yet received any further comments or suggestions.\\n\\nIf there are any remaining points or clarifications needed, please feel free to let us know. We greatly value your insights and are eager to ensure the final manuscript meets your expectations.\\n\\nThank you for your time and thoughtful consideration.\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"summary\": \"The paper establishes a data pipeline for constructing content-style-stylized image pairs and introduces the IMAGStyle dataset. Additionally, it utilizes this dataset to perform end-to-end training on the proposed CSGO framework, achieving style transfer generation under various input conditions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"A high-quality dataset composed of content-style-stylized image pairs is proposed, which can be useful for research in style transfer.\", \"This paper proposes a method for decoupling content and style, injecting them separately into different locations within the U-Net. Additionally, it combines ControlNet for further integration of the injected features. In this approach, both U-Net and ControlNet have fixed parameters, creating an efficient training framework.\", \"The paper shows many analyses and visualization results of the proposed method, and easy to follow.\"], \"weaknesses\": [\"If the dataset and its construction pipeline are one of the contributions of this paper, it is necessary to provide results using this dataset for training on other baseline methods and compare them with the proposed CSGO framework. This would demonstrate the effectiveness of the dataset and the robustness of the CSGO method.\", \"The proposed method borrows from IP-Adapter and ControlNet. And, the specific inputs for the three proposed Cross-Attention blocks and the method of feature injection have not been clearly explained.\", \"The quantitative evaluation is relatively limited, additional metrics could be included for assessment. On the other hand, regarding qualitative evaluation, the existing visual results do not intuitively reflect the advantages of this method. 
(Some additional evaluation metrics, such as FID, Aesthetic Scores, and user studies.)\", \"Figure 1 serves as the first illustration, and it should be clearly introduced, including the input image, output image, and comparison images, providing visual guidance to facilitate better understanding. This should also include the Figure 1(3) part.\", \"Suggestions for the format: citations need to be changed to conform more to the standard \\\\citet or \\\\citep; the font formatting and size of all similar-level figures in the paper are inconsistent, and the arrangement of images is rough and needs further improvement.\"], \"questions\": [\"I am curious why the dataset is abbreviated as IMAGStyle and the method is abbreviated as CSGO.\", \"Can this method be applied to multiple content images or multiple style images as reference images to achieve better results?\", \"During the data cleaning phase, CAS is used to validate content consistency. How can we ensure that the style generated by LoRA for the images is correct?\", \"It appears to empirically inject Content into the down block and Style into the UpBlock; could you clarify the rationale behind this choice?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. We have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns.\\n\\nPlease let us know if you have any remaining questions or require additional clarification. We value your feedback and are eager to ensure our work meets the highest standards.\\n\\nThank you again for your thoughtful insights and guidance.\\n\\nBest regards,\\nCSGO Authors\"}",
"{\"title\": \"Official Comment by Authors to Reviewer qQFs(1/2)\", \"comment\": \"Thank you for your recognition of our work and for your insightful comments.\\n\\n---\\n\\n**Q1: In the second step of data creation, AdaIN with DINO features is used to filter out images with style leakage. Does this process ensure the removal of style leakage related to factors such as content pose, size, and background? It is necessary to demonstrate that the filtering process addresses various types of style leakage, including texture, color, pose, size, and background.**\", \"a\": \"As shown in Fig. 14 & Fig. 15 of the original paper, we show the results of the ablation experiments for the hyperparameters involved. In addition, we give the most suitable parameters for style transfer.\"}",
"{\"title\": \"Response to Reviewer y9qW\", \"comment\": \"Dear Reviewer y9qW\\uff0c\\n\\nThank you for raising the score! We sincerely appreciate your recognition of our work and your valuable feedback. \\n\\nIn particular, thanks to the reviewer's reminder, we carefully checked and revised the details of the images.\", \"including_but_not_limited_to_the_following_parts\": \"1) adjusted the arrangement of subfigure 2 of Fig. 1 to make it more reasonable.\\n\\n2) Aligned the image alignment of Fig. 2, Fig. 5, Fig. 6, Fig. 7, Fig. 9, Fig. 10, and Fig. 11.\\n\\n3) Aligned the text size in Figures 6, 7 and 8.\\n\\n4) Fixed formatting issues with citep and citet.\\n\\nPlease let us know if further clarifications are needed or if there are any remaining points you would like us to address. We greatly value your feedback and are eager to work towards resolving all concerns. We will continue refining this method and hope to contribute even more impactful work in the future.\\n\\nThank you again for your time and thoughtful input.\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"title\": \"Official Comment by Authors to Reviewer zBGr\uff081/2\uff09\", \"comment\": \"We thank the reviewers for recognizing that the **motivation** of our work **is valid**.\\n\\n---\\n\\n**Q1: The authors claim that the performance of image style transfer is limited because of the lack of a large-scale stylized dataset, which makes it impossible to train models end-to-end. However, the proposed dataset is learned by training and combining different LoRAs, which means the generated stylized data is not the real ground truth for end-to-end training. In fact, the whole framework seems to try to distill the generated dataset in one adaptor.**\", \"a\": \"We thank the reviewer for the introduction and appreciate their observations. We agree that our approach does not conflict with the face-swapping scheme using <source, target, results>. Work [r1] utilizes real images to construct the source and target images in <Source, Target, Result>, which ensures that the target learned by the model comes from the real scene. However, the difference between the face-swapping task and the style transfer task is that it is difficult to construct content images and style images from real images. Style transfer involves both high- and low-dimensional features, such as color, texture, hue, and strokes. Although we can apply some image data-augmentation schemes to fade them into other images, it limits the diversity of style transfer. For instance, it was challenging to degrade a furry doll as a target image into other styles as fake content images. Interestingly, our early approach was to generate fake content and style images by fading the style images. However, the results are significantly less effective than using the proposed IMAGStyle dataset. 
Therefore, it is a feasible way to construct the target image by real content image and real style image.\\n\\nWe believe that ControlNet and IPAdapter have become the most effective and widely accepted methods for feature injection in the era of diffusion modeling. They offer simplicity and reliability, making them ideal for scenarios requiring feature injection. Furthermore, we emphasize that the main contribution of this paper is to provide a set of style transfer dataset construction and cleaning methods while annotating a high-quality segmentation migration dataset. With the support of this dataset, we utilize the mainstream framework to build a simple but effective style transfer framework, CSGO, which enables CSGO to unify three key style control tasks: image-driven style transfer, text-driven stylized synthesis, and text editing-driven stylized synthesis tasks through independent content control and style control.\\n\\n[r1] Huang, Ziyao, et al. \\\"Identity-Preserving Face Swapping via Dual Surrogate Generative Models.\\\" ACM Transactions on Graphics 43.5 (2024): 1-19.\\n\\n---\"}",
"{\"title\": \"Response to All Reviewers\", \"comment\": \"We thank the reviewers for their feedback and comments. In particular, we are pleased that they found the article to be well motivated (zBGr), well structured (rbd7, Vjbz), and that the proposed IMAGStyle dataset is of high quality (y9qW), valuable (rbd7, Vjbz), solid (Vjbz), and of extensive utility (qQFs, y9qW). In addition, they perceived the style transfer results as good (rbd7) with high satisfaction (qQFs). Here we briefly outline the changes made to the manuscript and recurring points in the reviews.\\n\\n**Changes to the Manuscript**\\n\\n1) A discussion of failure cases was added in the supplemental material in Figure 2.\\n\\n2) The presentation of CAS indicator filtering cases was added in Figure 1 of the Supplementary Material.\\n\\n3) Added a comparison with the LoRA methodology, see Figure 3 in the Supplementary Material.\\n\\n4) Adjusted the arrangement of subfigure 2 of Fig. 1 to make it more reasonable.\\n\\n5) Fixed the image alignment of Fig. 2, Fig. 5, Fig. 6, Fig. 7, Fig. 9, Fig. 10, and Fig. 11.\\n\\n6) Unified the text size in Figures 6, 7, and 8.\\n\\n7) Fixed formatting issues with citep and citet.\\n\\n**Q1: Clarification of the article's main contributions and model structure**\", \"a\": \"We added the human evaluation results. Setting: we randomly select 100 sets of results from the test set. Of these, 20 groups are portraits and 20 groups are sketches; the rest were randomized. Subsequently, a user research experiment was conducted to compare CSGO with StyleShot-lineart, InstantStyle, and StyleAligned respectively. 
"Each group contains four generated results and the user selects the best result based on transfer quality.\\n\\n|VS| CSGO win | Tie | CSGO loss | \\n|:----------------:|:----------------:|:-----------:|:-----------:|\\n|StyleShot| 58.5% |21.4%|20.1%|\\n|InstantStyle| 64.2% |20.6%|15.4%|\\n|StyleAligned| 67.0% |12.3%|10.7%|\\n\\n**Q3: Difference with IP-Adapter, StyleAdapter, InstantID, InstantStyle.**\\n\\nWe show the differences between CSGO and the above methods in the table below. In particular, IP-Adapter and InstantID target tasks different from those applicable to CSGO. Compared to StyleAdapter and InstantStyle, CSGO supports more diverse style control tasks, more detailed content and style control capabilities, and a high-quality ternary style dataset.\\n\\n|Methods|IP-Adapter[1]|StyleAdapter[2]|InstantID[3]|InstantStyle[4]|CSGO|\\n|:----------------:|:----------------:|:-----------:|:-----------:|:-----------:|:-----------:|\\n|Task|Content consistency maintenance|Text-driven stylized synthesis|ID consistency maintenance|Text-driven stylized synthesis|Image-driven style transfer, text-driven stylized synthesis, and text editing-driven stylized synthesis tasks|\\n|Training data|Reconstruction method, no pair data|Reconstruction method, no pair data|Reconstruction method, no pair data|Reconstruction method, no pair data|The proposed IMAGStyle, triplet data|\\n|Structural properties|Image features are injected into all blocks by IP-Adapter|Image features by PCA are injected into all blocks via IP-Adapter|Face features are injected into all modules via IPA, identity net|Based on IP-Adapter weights, image features are injected only into up_blocks.0.attentions.1|Separates the content control and style control branches using IPadapter and controlnet. 
Style features are injected into controlnet and up_blocks through IPadapter respectively, and content features are controlled by controlnet and down_blocks|\\n\\n\\n[1]Ye, Hu, et al. \\\"Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models.\\\" arXiv preprint arXiv:2308.06721 (2023).\\n\\n[2]Wang, Zhouxia, et al. \\\"Styleadapter: A single-pass lora-free model for stylized image generation.\\\" arXiv preprint arXiv:2309.01770 (2023).\\n\\n[3]Wang, Qixun, et al. \\\"Instantid: Zero-shot identity-preserving generation in seconds.\\\" arXiv preprint arXiv:2401.07519 (2024).\\n\\n[4]Wang, Haofan, et al. \\\"Instantstyle: Free lunch towards style-preserving in text-to-image generation.\\\" arXiv preprint arXiv:2404.02733 (2024).\"}",
"{\"title\": \"Official Comment by Authors to Reviewer rbd7 (1/2)\", \"comment\": \"We thank the reviewer for recognizing our article as **well-written**, **clearly structured**, and **easy to understand**, and for finding the dataset **valuable**.\\n\\n---\\n**Q1: Clarification of the article's main contributions and model structure**\", \"a\": \"In CSGO, the weights of the Controlnet model are fixed to ensure that content information is as complete as possible after style transfer. Therefore, if the style features are only injected into the up block, **the original content features output by Tile controlnet will be directly injected into the up block and weaken the style information**. Therefore, we inject style features into the Controlnet model in advance so that the output of the controlnet contains pre-merged content and style features.\\nIn fact, the principle of injecting style features in controlnet is similar to that of the base stable diffusion model [1,2]. Fixing the weights of the base model can still adjust the style of the generated image, so it is also effective in the controlnet (i.e., the controlnet model, whose structure is similar to the base model).\\nFinally, from the results of the supplemental quantitative experiments (see the table below), the style similarity score CSD for style transfer was significantly improved after the injection of style features into controlnet.\\n\\n| Metric | (1) W/O Content Control | (2) W Content Control W/O style injection in ControlNet| CSGO |\\n|:----------------:|:-----------:|:-----------:|:--------------------:|\\n| CSD |0.5381|0.4873| 0.5146 |\\n| CAS |1.7723|0.8372| 0.8386 |\\n| Aesthetics Score |5.6325|5.5091| 5.5467 |\\n\\n[1]Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv, 2023.\\n\\n[2] Haofan Wang, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen. Instantstyle: Free lunch towards style-preserving in text-to-image generation. 
arXiv, 2024.\\n\\n---\"}",
"{\"comment\": \"Thank you for the prompt response.\\n\\nIt seems like the raw data of the curated style-content images is currently unavailable due to its removal from storage. I wonder if the dataset will be made available upon final submission. Is there an estimated timeline for re-generating the synthetic dataset?\"}",
"{\"summary\": \"This paper presents a reference-based image stylization method. To achieve this goal, a style encoder and an image encoder are presented. Features are injected into the diffusion backbone through selected layers. Meanwhile, a 210K (image, style, stylized image) triplet dataset was built and used for training the proposed model. In the experimental results, content similarity and style similarity were evaluated. Comparisons were made between the proposed method and other SOTA methods. Ablation studies are informative and comprehensive.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is pretty complete, including a solid dataset, training details, evaluation comparisons and ablation studies.\\n2. The proposed dataset is useful for future related work\", \"weaknesses\": \"1. The overall training method is not new; it follows lines of research like IP-Adapter, InstantID, and InstantStyle, and the usage of AdaIN is common too in stylization research.\\n2. In the dataset, the stylized image is fixed, whereas given a content image and a style image, the stylized image can vary based on preference. Since the extent to which the content image is stylized is fixed, the proposed network fits to that, which reduces the potential capacity to adapt. I understand there's a control factor to adjust toward more or less stylization, but the upper bound is the dataset.\", \"questions\": \"Can you show some failure cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. We have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns.\\n\\nPlease let us know if you have any remaining questions or require additional clarification. We value your feedback and are eager to ensure our work meets the highest standards.\\n\\nThank you again for your thoughtful insights and guidance.\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"title\": \"Official Comment by Authors to Reviewer y9qW (1/2)\", \"comment\": \"Thanks to the reviewers for praising the high quality of the dataset and the easy-to-follow methodology.\\n\\n---\\n\\n**Q1: It is necessary to provide results using this dataset for training on other baseline methods and compare them with the proposed CSGO framework.**\", \"a\": \"We thank the reviewers for their valuable suggestions and agree that the inclusion of additional metrics is warranted. As a result, we have provided further quantitative metrics, which are presented in the table below. Intuitively, when the content and style do not blend well, the aesthetic score tends to be lower, and the FID (Fr\\u00e9chet Inception Distance) will also increase. Based on these observations, we believe that CSGO remains a highly competitive model for style transfer.\\n\\n|Metric |Stytr^2|Style-Aligned | StyleID| InstantStyle |StyleShot|StyleShot-lineart|CSGO|\\n|:----------------:|:-----------:|:-----------:|:--------------------:|:-----------:|:-----------:|:--------------------:|:--------------------:|\\n|FID|3.2729|2.5732|5.1680|2.6308|2.2395|2.1694|2.0391|\\n|Aesthetics Score|4.0387|3.7463|4.7643|5.4824|5.6728|5.2542|5.5467|\\n\\n---\"}",
"{\"comment\": \"Thank you to the author for the reply. The author's response has addressed most of my concerns regarding the paper and has helped enhance its overall completeness. I will adjust my rating accordingly.\\nHowever, I still have some reservations about the contribution of the method and the formatting issues in the revised version.\"}",
"{\"summary\": \"The authors introduce a large dataset, IMAGStyle, consisting of 210k triplets, to train a style transfer model via a simple feature injection technique. To construct IMAGStyle, they collect arbitrary pairs of content and style images, (i) apply style transfer to the content images, and (ii) filter out stylized images that exhibit content leakage from the style images. They propose a straightforward adapter- and controlnet-based architecture with modifications in the cross-attention and feature injection layers.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(i) The curated dataset, IMAGStyle, encompasses a broad range of content-style images, demonstrating extensive applicability.\\n\\n(ii) The visual quality of the stylized image samples presented in the paper and appendix is highly satisfactory.\\n\\n(iii) The concept is straightforward, and the methodology is direct.\", \"weaknesses\": \"(i) In the second step of data creation, AdaIN with DINO features is used to filter out images with style leakage. Does this process ensure the removal of style leakage related to factors such as content pose, size, and background? It is necessary to demonstrate that the filtering process addresses various types of style leakage, including texture, color, pose, size, and background.\\n\\n(ii) The comparison with existing methods appears to omit several relevant works. There are more recent, well-performing approaches that utilize textual inversion variants, LoRA/DreamBooth variants, and training-free methods.\\n\\n(iii) There are too many variants of hyper-parameters that affect the quality of image samples: content scale $\\\\delta_c$, another content scale $\\\\lambda_c$, style scale $\\\\lambda_s$, and cfg scale. 
There needs to be a specific hyper-parameter setting that generally leads to satisfactory results.\", \"questions\": \"(i) What are the primary differences between the proposed method and existing adapter variants, such as Ip-Adapter (Ye et al., 2023) and StyleAdapter (Wang et al., 2023), which leverage content-style pairs for adapter training? These models are noted for their limited generalizability to style images that were not included in the training data. Does CSGO exhibit the same limitation? Please provide examples using style images that include random cartoon characters not present in the WikiArt dataset.\\n\\n(ii) In addition to qualitative samples and CSD similarity measurements, please include human evaluation results on randomly sampled stylized images.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. We have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns.\\n\\nPlease let us know if you have any remaining questions or require additional clarification. We value your feedback and are eager to ensure our work meets the highest standards.\\n\\nThank you again for your thoughtful insights and guidance.\\n\\nBest regards,\\nCSGO Authors\"}",
"{\"title\": \"Response to Reviewer qQFs\", \"comment\": \"Dear Reviewer qQFs\\uff1a\\n\\nWe appreciate your thoughtful feedback. Here, we would like to provide clarifications to address some potential misunderstandings.\\n\\nOf course, the cleaned data, the IMAGStyle dataset, will certainly be published. What we are removing from storage is the raw data that has been generated directly by B-LoRA, and they are dirty data. It seems to us that these unprocessed data have no value. We will publish the IMAGStyle dataset obtained by CAS processing. Finally, if more style transfer data triples are needed, we can generate and then clean and filter them through the proposed CSGO, which is less costly and efficient compared to B-LoRA. We also hope that this framework and method can promote the development of style transfer.\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"comment\": \"Dear Reviewer rbd7\\n\\nAs today is the last day to revise the manuscript, I wanted to kindly follow up regarding the concerns you raised earlier. We have already provided detailed responses to address your feedback, but we have not yet received any further comments or suggestions.\\n\\nIf there are any remaining points or clarifications needed, please feel free to let us know. We greatly value your insights and are eager to ensure the final manuscript meets your expectations.\\n\\nThank you for your time and thoughtful consideration.\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"comment\": \"We appreciate all reviewers valuable comments. We were wondering if our responses have addressed your concerns. Please let us know if you have additional questions. Thank you!\"}",
"{\"comment\": \"Thank you to the authors for addressing some of the issues.\\n\\nHowever, I still have reservations about the dataset filtering system using CAS.\\n\\nIn the appendix, the authors appear to have omitted the CAS filtering results for some reason. Specifically, Figures ?? are not shown in the Appendix, as follows:\\n\\n> *we show 10 sets of CAS filtering examples in Figures ??. These cases show that CAS\\ncan clean illogical generated graphs for pose, size, and so on. However, we emphasize that since\\nB-LoRA is actually more stable for the generation of styles, it is up to us to filter the images with\\nCSD. In our experiments, it is possible to filter using only CAS without CSD.*\\n\\nI believe that the curation of a large-scale style-content dataset is a critical step in this area of research. Therefore, the dataset curation process should be presented in greater detail.\\n\\nAs a result, the rating remains unchanged.\"}",
"{\"metareview\": \"This work introduced a style encoder and an image encoder for style transfer task by using feature injection through selected layers, and collected a new 210K dataset for training such a model. While reviewers appreciate good results and the effort of dataset collection, there are also several major common concerns raised. It overall received three borderline reject and two borderline accept, while being a bit diverged but leaning towards negative. Two main weaknesses lie in the novelty (similarity with prior work like IP-adaptor or AdaIN) and the validity of collected (generated) dataset. The rebuttal unfortunately did not convince reviewers to obviously change their opinion, on the widely used idea of feature injection in prior arts and lack of evidence to prove the collected dataset is really useful. After checking all the comments and discussions, AC agrees that more contributions in method design and more in-depth analysis are needed to make this work more solid. Therefore a decision of reject is made and authors are encouraged to revise based on the comments for future resubmission.\", \"additional_comments_on_reviewer_discussion\": \"Two main weaknesses lie in the novelty (similarity with prior work like IP-adaptor or AdaIN) and the validity of collected (generated) dataset. Authors do provide some explanations via rebuttal but AC agrees with reviewers that the difference of this work compared to other feature injection based method is incremental. In addition, three reviewers who gave the score of 5 (marginally below) are with highest confidence score 5. So AC puts more weight on their opinion compared to the other two reviewers who gave the score of 6. Thus AC made the reject decision.\"}",
"{\"summary\": \"This study proposes a diffusion-based stylized image generation method. The authors claim that the lack of paired data for training models limits the performance of popular stylization methods. To this end, the authors propose an augmented dataset by training various LoRAs for different contents and styles. Then, an IP-Adapter-style framework is trained on the collected dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Overall, the reviewer feels the motivation is valid. Given that many image-generation tasks are ill-posed and lacking ground truth, trying to find the paired data for supervised learning is valid. Also, the reviewer would like to express appreciation for the efforts in collecting the dataset, which may be quite time-consuming. The proposed CSGO is reasonable and easy to follow.\", \"weaknesses\": [\"The authors claim that the performance of image style transfer is limited because of the lack of a large-scale stylized dataset, which makes it impossible to train models end-to-end. However, the proposed dataset is learned by training and combining different LoRAs, which means the generated stylized data is not the real ground truth for end-to-end training. In fact, the whole framework seems to try to distill the generated dataset in one adaptor.\", \"Most image generation tasks are ill-posed and lack ground truth. A similar idea goes to [r1] ''Identity-Preserving Face Swapping via Dual Surrogate Generative Models.'' Face-swapping methods try to fuse one source image with one target image. Similar to the setting of image style transfer, no ground-truth information could be collected for face-swapping tasks. Thus, the authors of [r1] tried to generate the <source, target, results> triplets. A more careful analysis of the pros and cons of using such generated data is given in [r1]. 
However, in this study, the author claims that style transfer lacks a larger-scale stylized dataset without careful analysis or support.\", \"The proposed method, CSGO, is just a combination of many existing techniques. For content control, the two strategies are the combination of ControlNet and IP-Adaptor. For style control, they employ Perceiver Resampler structure as Alayrac et al. to project the style features and then do some trivial modifications to controller or ip-adaptor. The reviewer understands that the authors need to verify the usefulness of the proposed dataset. However, such a method could not make a good contribution to ICLR.\", \"The proposed evaluation index has the same problem. Using AdaIN for content and style evaluation is common sense in a related field, but it could not be counted as a contribution.\", \"No user study is conducted. Technical writing should be paid more attention. For example, most of the cross-reference format is wrong (Maybe there is a mistake in using cite citet and citep)\"], \"questions\": \"Could the collected dataset supplement the training of traditional style transfer methods? For example, the collected pair can be used to tune Stytr or CAST. Using the collected dataset and the training pipeline of traditional style transfer tasks, could the method perform much better than before?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Comment by Authors to Reviewer qQFs (2/2)\", \"comment\": \"**Q4: What are the primary differences between the proposed method and existing adapter variants, such as Ip-Adapter (Ye et al., 2023) and StyleAdapter (Wang et al., 2023), which leverage content-style pairs for adapter training?**\", \"a\": \"Thanks to the reviewer's suggestion, we added the human evaluation results.\\n\\nSetting: we randomly select 100 sets of results from the test set. Of these, 20 groups are portraits and 20 groups are sketches; the rest were randomized. Subsequently, a user research experiment was conducted to compare CSGO with StyleShot-lineart, InstantStyle, and StyleAligned respectively. Each group contains four generated results and the user selects the best result based on transfer quality.\\n|VS| CSGO win | Tie | CSGO loss | \\n|:----------------:|:----------------:|:-----------:|:-----------:|\\n|StyleShot| 58.5% |21.4%|20.1%|\\n|InstantStyle| 64.2% |20.6%|15.4%|\\n|StyleAligned| 67.0% |12.3%|10.7%|\\n\\n--- \\n\\n**If our answers are more in line with your expectations, we kindly invite you to reconsider your initial rating.**\"}",
"{\"comment\": \"Dear Reviewer zBGr:\\n\\nAs today is the last day to revise the manuscript, I wanted to kindly follow up regarding the concerns you raised earlier. We have already provided detailed responses to address your feedback, but we have not yet received any further comments or suggestions.\\n\\nIf there are any remaining points or clarifications needed, please feel free to let us know. We greatly value your insights and are eager to ensure the final manuscript meets your expectations.\\n\\nThank you for your time and thoughtful consideration.\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"title\": \"Response to Reviewer qQFs\", \"comment\": \"A: We thank the reviewers for their feedback.\\nFirst, the raw data was cleaned due to storage limitations. In response to the reviewers\\u2019 questions, we retrained multiple sets of B-LoRA. Subsequently, CAS cleaning was applied, **as shown in Figures 18, 19, 20, 21, and 22 of the Supplementary Material**. In total, **10 sets** of generated images were filtered using CAS. These examples demonstrate that CAS effectively cleans illogical generated images in terms of pose, size, and other inconsistencies. However, we emphasize that since B-LoRA is inherently more stable for style generation, we rely on CSD for filtering these images. In our experiments, filtering with CAS alone is feasible without the need for CSD. For style images generated by arbitrary LoRA, we propose using CSD to compute style similarity and filter out images with inconsistent colors.\\n\\nWe would like to show that CAS is very effective and intuitive for cleaning our style transfer data (comparing pixel-level differences after style removal via DINO features). Whether it is effective for the rest of the complex scenarios is something that needs to be further verified.\"}",
"{\"comment\": \"Dear Reviewer Vjbz:\\n\\nAs today is the last day to revise the manuscript, I wanted to kindly follow up regarding the concerns you raised earlier. We have already provided detailed responses to address your feedback, but we have not yet received any further comments or suggestions.\\n\\nIf there are any remaining points or clarifications needed, please feel free to let us know. We greatly value your insights and are eager to ensure the final manuscript meets your expectations.\\n\\nThank you for your time and thoughtful consideration.\\n\\nBest regards,\\n\\nCSGO Authors\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nWe hope this message finds you well. We sincerely appreciate the time and effort you have dedicated to reviewing our submission. We have submitted our rebuttal and would like to follow up to inquire whether our responses have sufficiently addressed your concerns.\\n\\nPlease let us know if you have any remaining questions or require additional clarification. We value your feedback and are eager to ensure our work meets the highest standards.\\n\\nThank you again for your thoughtful insights and guidance.\\n\\nBest regards,\\nCSGO Authors\"}",
"{\"title\": \"Official Comment by Authors to Reviewer rbd7 (2/2)\", \"comment\": \"**Q3: Ablation study results of W Content Control W/O style injection?**\", \"a\": \"Thanks to the reviewer for your valuable responses.\\nIn response to the reviewers' suggestions, we incorporated quantitative results from the feature-injected ablation experiments. Additionally, we introduced the aesthetic score proposed by Reviewer 3 as a reference metric. The tabulated results reveal that the CSD score improves after stylistic features are injected into the ControlNet branch, indicating that the generated style aligns more closely with the input stylized image. Furthermore, the aesthetic score confirms that this modification does not diminish the visual appeal of the generated image.\\n\\n\\n| Metric | (1) W/O Content Control | (2) W Content Control W/O style injection in ControlNet| CSGO |\\n|:----------------:|:-----------:|:-----------:|:--------------------:|\\n| CSD |0.5381|0.4873| 0.5146 |\\n| CAS |1.7723|0.8372| 0.8386 |\\n| Aesthetics Score |5.6325|5.5091| 5.5467 |\\n\\u00a0\\n\\nFrom the quantitative results, it can be found that the lack of content control and the rise in CAS metrics indicate that textual prompts alone cannot maintain the original image content information. And after the introduction of content control, the CAS index decreases significantly, indicating that the content control branch plays the role of ensuring that the content is not lost. Meanwhile, it can be found that when the style features are injected into controlnet, the style features can be more significantly migrated to the content images, improving the quality of style transfer.\\n\\n---\\n\\n**If our rebuttal better aligns with your expectations, we respectfully request that you reconsider your initial rating.**\"}",
"{\"title\": \"Official Comment by Authors to Reviewer Vjbz\", \"comment\": \"We thank the reviewer for finding our article **pretty complete** and **useful**, with a **solid dataset**.\\n\\n---\\n\\n**Q1: Difference of CSGO from recent research like IP-Adapter, InstantID, and InstantStyle, and the usage of AdaIN is common too in stylization research.**\", \"a\": \"Thanks to the reviewer\\u2019s suggestion, we have added the failure cases to the supplementary material. First, for real portrait stylization, as shown in the first row, there is a potential loss of facial identity. Portrait images can be difficult to collect due to the privacy issues involved, leading to some limitations in CSGO's style transfer for real portraits.\\nSecond, despite incorporating styles into the ControlNet and base model, CSGO may still leak information, such as the original image's color. This phenomenon is due to the fact that the dataset still has insufficient pair data and needs to be further expanded using existing models (e.g., CSGO).\\nIn the future, we aim to enhance the CSGO framework in several ways. First, we plan to use CSGO in conjunction with LoRA to improve the portrait portion of the IMAGStyle dataset and enhance portrait stylization capabilities. Second, we will redesign and train the content encoder and style encoder to minimize content leakage and style leakage. However, we acknowledge that these improvements may not be achievable in the short term.\\n\\n---\\n\\n**If our answers are more in line with your expectations, we kindly invite you to reconsider your initial rating, which would give us more confidence to explore further at a later stage.**\"}",
"{\"title\": \"Official Comment by Authors to Reviewer zBGr (2/2)\", \"comment\": \"**Q3: The proposed evaluation index has the same problem. Using AdaIN for content and style evaluation is common sense in a related field, but it could not be counted as a contribution.**\", \"a\": \"We thank the reviewers for their valuable suggestions. In response, we retrained the StyTr^2 [1] model using the IMAGStyle dataset. To comprehensively evaluate the performance of IMAGStyle, we conducted two experiments. First, we retrained StyTr^2 using only IMAGStyle. Second, we fine-tuned StyTr^2 using IMAGStyle, leveraging the released model weights pre-trained for 160,000 steps.\\nStyTr^2 employs a non-trivial training approach, wherein the model is implicitly constrained to produce results with content closely aligned to the content image and style closely aligned to the style image. The primary advantage of IMAGStyle lies in its <content, style, target> triplet structure. To further enhance the performance of StyTr^2, we introduced explicit pixel-level constraints by incorporating MSE loss. This addition enforces the generated results to be closer to the target map within the triplet, thereby improving style transfer fidelity.\\n\\n| Metric |Stytr^2 | Stytr^2 on IMAGStyle| Fine-tuning Stytr^2 on IMAGStyle |\\n|:----------------:|:-----------:|:-----------:|:--------------------:|\\n| CSD |0.2695| 0.3430 | 0.3597 |\\n| CAS |0.9699|0.9332| 0.9280 |\\n| Aesthetics Score |4.1387|4.5146| 4.6975 |\\n\\nWe ran the retraining and fine-tuning on 8 A800-80G machines for 10,000 steps with a batch size of 24; the results are shown in the table above. The effectiveness of IMAGStyle's triplet data for the style transfer model can be clearly observed.\\n\\n[1] Deng Y, Tang F, Dong W, et al. Stytr2: Image style transfer with transformers. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 
2022: 11326-11336.\\n\\n--- \\n\\n**If our answers are more in line with your expectations, we kindly invite you to reconsider your initial rating.**\"}",
"{\"summary\": \"This paper was well written with clear structure and easy to understand. The paper firstly proposes a high quality and carefully cleaned dataset with 210k Content-StyleStylized Image Triplets. Then, the paper proposes a new style transfer framework CSGO, which uses independent content and style feature injection modules to achieve high-quality image style transformations. Finally, a new score matrix named CAS was introduced to measure content loss after content-style transferred.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The open-source dataset of the article is valuable to the community. The experimental results reflected in the article are good.\", \"weaknesses\": \"The method proposed in the article is relatively simple and tends to be stacked. Although the author claims that this is an end-to-end approach, its innovation is insufficient.\", \"questions\": \"The feature injection amplification mentioned in the article are common methods, except for inject style features into Controlnet. What is the principle explanation for the operation mentioned in the paper \\u2014\\u201cThe insight of this is to pre-adjust the style of the content image using style features making the output of the Controlnet model retain the content while containing the desired style features.\\u201d Actually, I can't see its importance from Fig 9. (2) W Content Control W/O style injection in ControlNet. More ablation study need to be supplemented with style similarity (CSD) and content alignment (CAS) Matrix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}"
]
} |
E3LDsbUSRZ | CliBench: A Multifaceted and Multigranular Evaluation of Large Language Models for Clinical Decision Making | [
"Mingyu Derek Ma",
"Chenchen Ye",
"Yu Yan",
"Xiaoxuan Wang",
"Peipei Ping",
"Timothy S Chang",
"Wei Wang"
] | The integration of Artificial Intelligence (AI), especially Large Language Models (LLMs), into the clinical diagnosis process offers significant potential to improve the efficiency and accessibility of medical care. While LLMs have shown some promise in the medical domain, their application in clinical diagnosis remains underexplored, especially in real-world clinical practice, where highly sophisticated, patient-specific decisions need to be made. Current evaluations of LLMs in this field are often narrow in scope, focusing on specific diseases or specialties and employing simplified diagnostic tasks. To bridge this gap, we introduce CliBench, a novel benchmark developed from the MIMIC IV dataset, offering a comprehensive and realistic assessment of LLMs' capabilities in clinical diagnosis. This benchmark not only covers diagnosis from a diverse range of medical cases across various specialties but also incorporates tasks of clinical significance: treatment procedure identification, lab test ordering and medication prescriptions. Supported by structured output ontologies, CliBench enables a precise and multi-granular evaluation, offering an in-depth understanding of LLM's capability on diverse clinical tasks of desired granularity. We conduct a zero-shot evaluation of leading LLMs to assess their proficiency in clinical decision-making. Our preliminary results shed light on the potential and limitations of current LLMs in clinical settings, providing valuable insights for future advancements in LLM-powered healthcare. | [
"Clinical Decisions",
"Large Language Model",
"Benchmark"
] | Reject | https://openreview.net/pdf?id=E3LDsbUSRZ | https://openreview.net/forum?id=E3LDsbUSRZ | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"tSwIZR3BBZ",
"k1VrvXtkf5",
"Nb6XpzwMGv",
"MNTmJdOX2t",
"6iovSy174T",
"5HocBLUcKU",
"5FGG7B43vp",
"2JhFHcDC3H"
],
"note_type": [
"official_review",
"official_review",
"decision",
"meta_review",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1730696444085,
1730705990314,
1737523812799,
1734769245202,
1730513103497,
1733131444599,
1730652283390,
1730712487119
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7048/Reviewer_Tkst"
],
[
"ICLR.cc/2025/Conference/Submission7048/Reviewer_YzcX"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7048/Area_Chair_9iQe"
],
[
"ICLR.cc/2025/Conference/Submission7048/Reviewer_P3hN"
],
[
"ICLR.cc/2025/Conference/Submission7048/Reviewer_N2Z1"
],
[
"ICLR.cc/2025/Conference/Submission7048/Reviewer_h2gq"
],
[
"ICLR.cc/2025/Conference/Submission7048/Reviewer_N2Z1"
]
],
"structured_content_str": [
"{\"summary\": \"This paper introduces CLIBENCH, a comprehensive benchmark for evaluating large language models in clinical decision-making, designed to address limitations of prior clinical evaluations. Unlike previous benchmarks that focus narrowly on single diseases or specialties, CLIBENCH provides a broad, multi-specialty assessment across key clinical tasks: diagnoses, procedure recommendations, lab test ordering, and prescriptions. Built on the MIMIC-IV dataset, CLIBENCH employs structured medical ontologies to support multi-granular evaluation, enabling both coarse and fine-grained assessments. The authors evaluate a range of LLMs, including general-purpose, instruction-tuned, and domain-specific models, in a zero-shot setting, revealing strengths and limitations of current models in clinical contexts and highlighting areas for further development in LLM-powered healthcare.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The submission introduces CLIBENCH, a well-structured benchmark that addresses clinical decision-making, covering diagnoses, procedures, lab test orders, and prescriptions. This benchmark extends beyond typical disease-focused evaluations\\u200b\\n\\n1. Real-World Clinical Relevance: By leveraging MIMIC-IV dataset and aligning tasks with real-world clinical workflows, such as lab test ordering and treatment procedure identification, CLIBENCH captures the complexity of real clinical settings, which bridge a benchmark's relevance to practical applications\\u200b\\n\\n2. Metrics: detailed, multi-granular evaluation metric across various clinical tasks, at different abstraction levels, such as chapter and category for ICD-10 codes\\u200b\\n\\n3. 
Evaluation: a comprehensive evaluation of multiple open-source and closed-source LLMs, as well as one finetuned model, presenting clear results on the performance gaps\", \"weaknesses\": [\"Reliance on Zero-Shot Evaluations: Although the benchmark provides valuable insights, the study primarily relies on zero-shot evaluations, despite the sensitivity of LLM performance to prompt configurations. Exploring few-shot settings or incorporating multi-prompt variations could enhance robustness and offer a more comprehensive assessment of the benchmark's applicability.\", \"\\\"Domain-specialized models do not work\\\": This statement could benefit from a more nuanced analysis, with additional experiments to substantiate the findings. A deeper exploration into why domain-specific adaptations underperform in this context would provide valuable insights and help clarify the challenges involved.\", \"Lower Performance on Specific Tasks: Performance on several tasks, particularly procedures and lab test orders, is noticeably lower. Further investigation would clarify whether this discrepancy reflects inherent model limitations or potential areas to strengthen the benchmark's design.\", \"Ambiguity in Prompt Construction: While prompt construction is outlined, the study lacks an in-depth evaluation of prompt effectiveness across different clinical tasks. Analyzing how specific prompt structures influence model performance could yield valuable insights and improve task-specific optimization.\"], \"questions\": [\"Domain-Specific Model Performance: The claim that domain-specialized models don\\u2019t improve performance needs clarification. 
Could this underperformance be due to model training limitations, data biases, or potential improvements needed in benchmark design?\", \"Zero-Shot Evaluation: Given LLMs' sensitivity to prompts, would the authors consider testing few-shot settings or experimenting with prompt variations to assess model robustness?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work introduces a new LLM benchmark to assess LLMs' capabilities in clinical diagnosis using the MIMIC-IV database. This benchmark primarily covers four tasks: predicting discharge diagnoses, procedures, lab test orders, and prescriptions. Since entries in MIMIC-IV are stored in a structured format with expert ontology, this enables multi-granular evaluation from coarse to fine levels. For experimental studies, this work presents the zero-shot performance of various LLMs (Section 5), along with additional analysis (Section 6).\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The new large-scale LLM evaluation benchmark in the medical domain could be highly beneficial for the research community.\", \"A variety of LLMs were evaluated in this paper, allowing researchers to observe trends and understand which models are at least operational within the MIMIC-IV dataset and the four clinical tasks suggested in this work.\"], \"weaknesses\": [\"The main issue with this work is that the task formulation does not closely align with real-world clinical decision-making. Although the paper emphasizes that its tasks are grounded in clinical diagnosis (e.g., L61, Section 3.1) due to the use of the realistic MIMIC-IV dataset, there are significant aspects to consider, such as (1) the clinical decision-making process and (2) the benchmarking process itself.\", \"Regarding the connection of this benchmark to clinical decision-making, predicting ICD-10-CM diagnosis and ICD-10-PCS procedure codes is primarily a medical billing task, not a direct diagnosis or treatment task, which the authors have already recognized (see Section F. Limitations).\", \"Even if the benchmark assumes the ability to perform a billing task or a reasonable clinical workflow, its effectiveness as a benchmark is quite limited. 
Specifically, it is unclear whether sufficient input data has been properly formatted or extracted from MIMIC-IV for each task, raising doubts about whether a perfect score is achievable. Additionally, there is limited insight into how well LLMs are performing in this context. At a minimum, the paper should provide a baseline score, such as an expert clinician score or a majority-class prediction, to contextualize the LLMs' performance.\"], \"questions\": [\"Beyond the seven insights from LLM results (bolded text in Section 5.1), what is the clinical significance of GPT-4o achieving a score of 27.58 in the full code for the diagnosis decision task? How far is this from clinician behavior?\", \"Similarly, in the statement \\\"Models are less familiar with procedures and lab orders\\\" (bolded text in Section 5.2), are clinicians familiar with procedures and lab orders at a fine-grained level such that they could achieve a 100% score?\", \"During the patient journey from admission to discharge, there is a significant amount of structured data in the MIMIC-IV database beyond clinical notes. Is it possible to predict these tasks without using all available patient data? If not 100%, what is the upper bound for each task?\", \"Additionally, there are truncated instances during benchmarking, which means that even if LLMs perform perfectly, some may not be able to achieve their potential high scores. I think the evaluation dataset instances should ideally fit within an 8K context length, or there should be enough LLMs capable of handling the full context length of the dataset.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"metareview\": [\"The paper introduces CliBench, a benchmark using MIMIC-IV to evaluate LLMs in clinical decision-making tasks, providing insights into their potential, limitations, and suitability for real-world healthcare applications.\", \"Strengths\", \"CliBench introduces a benchmark for evaluating LLMs in clinical decision-making, covering diverse and realistic tasks like diagnoses, treatments, lab tests, and prescriptions\", \"The benchmark employs structured medical ontologies (ICD-10, LOINC, ATC) for fine-grained assessments\", \"Weaknesses\", \"Tasks like ICD-10 code prediction align more with medical billing than real-world clinical decision-making, raising doubts about the benchmark's relevance for actual clinical workflows\", \"The reliance on zero-shot settings and lack of few-shot or prompt variation, and the lack of human clinician baselines all adds up to limited evaluation\", \"Potential data leakage from the MIMIC-IV dataset, reliance on a single clinical center's data, and exclusion of negative cases undermine the benchmark's validity and generalizability.\"], \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, authors acknowledged core limitations of their work such as lack of prompt variation or lack of human baselines, but mostly promised to follow up in the future work.\"}",
"{\"summary\": \"The paper introduces CLIBENCH, a new benchmark for evaluating Large Language Models in clinical decision-making. CLIBENCH utilizes the MIMIC-IV dataset to offer a multi-faceted evaluation across various specialties and clinical tasks, including diagnosis, procedure identification, lab test ordering, and medication prescription. CLIBENCH employs structured ontologies (ICD-10-CM, ICD-10-PCS, LOINC, ATC) to enable precise, multi-granular evaluation. The authors conduct a zero-shot evaluation of several LLMs, highlighting their potential and limitations in clinical settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The use of ontologies and hierarchical evaluation provides a more nuanced understanding of LLM capabilities at different levels of granularity.\\n\\nThe inclusion of multiple clinical tasks across various specialties makes the open-source benchmark more representative of real-world clinical practice.\", \"weaknesses\": \"The use of publicly available MIMIC-IV data raises concerns about potential data leakage during LLM pre-training. While the authors discuss this, they should consider strengthening their argument by conducting further analysis or experiments to quantify the impact of potential leakage. In addition, solely on MIMIC-IV from a single medical center limits the generalizability of findings.\\n\\nThe absence of few-shot/ICL experiments restricts the understanding of LLM adaptability and learning potential in these tasks. Zero-shot setting on either clinical reasoning or outputting desired ontologies/structured outputs is not a realistic task.\\n\\nWhile the authors justify using billing codes as a proxy for \\\"collective knowledge,\\\" including a physician performance baseline would provide a more direct comparison and better contextualize LLM performance. \\n\\nGround truth relies solely on EHR records (line 198). 
The authors do not mention obtaining physician agreement or validation of the ground truth labels used in CLIBENCH.\", \"questions\": \"How sensitive are the results to variations in prompt phrasing? Did the authors explore different prompt variations?\\n\\nCould the authors elaborate on the limitations of using billing codes as a proxy for ground truth diagnoses?\\n\\nHow might CLIBENCH be extended to incorporate temporal aspects of clinical decision-making, given that patients do not actually arrive with all of their notes already available?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Looking forward to the author response\", \"comment\": \"This paper generally offers a valuable benchmark for evaluating LLMs\\u2019 proficiency in clinical diagnosis, and the proposed benchmark has the potential to facilitate the development of medical LLMs. If the authors can address my concerns in detail, I would be open to updating my score.\"}",
"{\"summary\": \"This paper presents CliBench, a benchmark including four important clinical tasks - diagnosis, treatment procedures, lab test orders, and prescriptions, developed from the publicly available MIMIC IV dataset. This paper conducts a comprehensive evaluation of the zero-shot performance of leading LLMs, including both open-source and proprietary models, general domain and medical specific models, on these four tasks. The findings highlight both the potential and the limitations of current LLMs in real-world clinical decision support.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is clearly written and easy to follow.\\n2. The selected four tasks are both common and important in real-world clinical settings. A benchmark including these tasks provides researchers in this field with a more comprehensive and practical way to evaluate LLMs for real-world clinical applications.\\n3. This paper compares the zero-shot performance of a wide range of LLMs, and evaluates them at different levels of diagnostic, procedure, lab test, and prescription codes. This approach enables a thorough evaluation of the capabilities of LLMs in clinical decision-making across varying degrees of resolution, offering a detailed understanding of their strengths and weaknesses.\\n4. The insights gained from analyzing the zero-shot performance of various leading LLMs are very interesting, and many shed light on potential future research directions such as post-training and instruction tuning.\", \"weaknesses\": \"1. For procedures, lab tests, and prescriptions, the inputs are patient profiles and medical records. Is this information sufficient for human clinical experts to make real-world decisions? If clinicians need to rely on additional data, such as ICU monitoring, it might be unrealistic to expect LLMs to make accurate decisions solely based on such information. 
Section 6.3 shows that missing important patient information can significantly impact LLM performance. If this subset of information aligns with what clinicians use, it would be helpful to justify the design and the validity of the benchmark in the paper.\\n\\n2. Section 3.3 mentions that admissions without records for the target tasks are filtered out. For AI systems deployed for decision support, I think it is also important for them to recognize cases where no procedure, lab test, or prescription is required (i.e., true negatives). Is there a plan to include such \\\"negative\\\" admissions in the evaluation set, or a justification that they should not be included?\\n\\n3. In section 3.6, micro-level precision, recall, and F1 are used as metrics. Figure 1(b) shows significant variance in the number of cases across disease types. Including macro-level metrics or disease type-specific metrics (similar to the patient breakdown analysis in Section 6.2) would provide more insights into the strengths and weaknesses of LLMs across different disease types, e.g. some LLMs might perform well on certain disease types but poorly on others, while other LLMs might demonstrate different behavior.\", \"note\": \"I am not a clinician and may not fully understand certain aspects of clinical decision-making. I am open to changing my score based on clarifications regarding these concerns.\", \"questions\": \"1. In line 209 - 214, the sampling is conducted on different levels for different targets, e.g. top-level for procedure codes and third-level for lab tests. What is the consideration behind the different levels? It might be helpful to mention it clearly in the paper.\\n2. In line 237 - 239, is the BERT used general-domain or biomedical-specific? If it is a general-domain model, can the sentence embeddings accurately capture medical-specific terms and determine the similarity between the LLM responses and code descriptions?\\n3. 
In line 338 - 340, is the LLaMA3 SFT fine-tuned by next-token prediction on ground-truth diagnosis, procedures etc.? For admissions with multiple diagnosis (or other tasks), how would the order of multiple labels in the output impact the SFT performance? Did you do any permutation or data processing to prevent potential bias?\\n4. In section 5, it might be helpful to use the actual model names rather than row numbers when referring to models, so that readers would not have to refer back to Table 3 to understand the performance differences between different models.\\n5. In Table 4, it is very interesting that all models achieve 99% F1 for Level 1, and also high F1 for Level 2. Is it because there are only 2/5 candidates for lab test order level 1/2, so that models can achieve good performance easily? I think it would be helpful if you could provide one or two sentence explanation for this so that it gives readers more intuitive insights on this performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper presents a new benchmark CliBench developed from the MIMIC IV dataset, offering a comprehensive and realistic assessment of LLMs\\u2019 capabilities in clinical diagnosis. Specifically, the authors construct four clinical decision-making tasks, requiring LLMs to predict the clinical codes of diagnoses, procedures, lab tests, and prescriptions based on the information recorded in the EHR of a patient. The dataset is constructed through a rule-based NLP pipeline and verified by a clinical NLP expert. The author tested the performance of a range of LLMs on the constructed dataset, and the results show that current LLMs generally perform poorly on these clinical decision-making tasks, especially in procedure prediction. The author further analyzed the impact of patient attributes, task difficulties, and clinical data elements on diagnostic tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors provide an open-source, multi-task evaluation set covering various clinical decision-making tasks, which will be highly beneficial for the development of future medical LLMs.\", \"The authors conducted a systematic evaluation on a total of 20+ LLMs, including the powerful GPT-4o model.\", \"The authors provide a detailed analysis of performance on diagnostic tasks, which may offer insights for the application of LLMs in diagnostics.\"], \"weaknesses\": \"My main concern lies in the evaluation approach of this work. Unlike other mainstream medical benchmarks, the authors use clinical code prediction as the downstream task and employ precision, recall, and F1-score as performance metrics for the four tasks. 
While this approach is indeed closer to the real deployment environment of medical AI, it also introduces the following issues:\\n\\n1.\\t**Test Prompt**: I noticed that the authors prompt the language model with \\u201cprovide as many diagnoses as you can until you are not confident about your diagnosis decision.\\u201d Has the impact of different prompt styles on performance been tested? The current prompt format seems to encourage outputting as many predicted codes as possible, which may be a key reason why, in Figure 3c, the F1-score increases as the number of ground-truth diagnoses grows. I believe it is necessary to test prompts with different phrasing (e.g., \\u201cplease provide an appropriately sized set of diagnoses\\u201d) to further improve the stability of the evaluation results.\\n\\n2.\\t**Answer Extraction**: I noticed that the authors allow the language model to output either clinical codes or predictions in text form. For text-based predictions, they use a BERT model pretrained on 1B sentence pairs to calculate sentence similarity and select the closest code as the predicted result. I have the following questions:\\n\\t+ When the model provides both code and text-based results, is priority given to extracting the code or to parsing the text? In such cases, is the accuracy higher when parsing the code directly or when interpreting the text result?\\n\\t+ What is the accuracy of this text parsing method based on sentence embeddings, and has any related analysis been conducted? Are there alternative methods that could further improve matching accuracy?\\n3.\\t**Human Physician Performance**: Although the authors provide some reasons in the appendix for not including human physician performance, I still believe it is essential to add human performance data (even if on a small scale) for this dataset. First, the tasks in this dataset are inherently challenging; for example, ICD-10-CM contains over 70,000 codes, and ICD-10-PCS has over 87,000 codes. 
Even for medical experts, accurately completing the coding without consulting the specific ICD-10 code set is very difficult. Including evaluation results from human experts under closed-book conditions would help us better understand the benchmark\\u2019s upper bound, which is highly valuable for this evaluation set.\", \"questions\": \"1. It is necessary to provide more details about the rule-based NLP pipeline used to construct the dataset.\\n2. What is the specific process for the clinical expert\\u2019s verification? Why was only one expert used, rather than multiple experts for cross-verification?\\n3. Most of the models evaluated in the paper are under 10 billion parameters, with only one model, Llama3-70B Instruct, having 70 billion parameters. More 70-billion-parameter models should be assessed to enhance the comprehensiveness of the evaluation, such as Med42, ClinicalCamel, and others.\\n4. While GPT-4o performs over 70 on the L1 setting of the other three task types, its performance in procedure prediction is only 29.80, even lower than that of Llama3-70B Instruct. I believe it is essential to conduct further case studies to uncover the underlying reasons for this discrepancy.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
E36NHwe7Zc | Evaluating Large Language Models through Role-Guide and Self-Reflection: A Comparative Study | [
"Lili Zhao",
"Yang Wang",
"Qi Liu",
"Mengyun Wang",
"Wei Chen",
"Zhichao Sheng",
"Shijin Wang"
] | Large Language Models fine-tuned with Reinforcement Learning from Human Feedback (RLHF-LLMs) can over-rely on aligned preferences without truly gaining self-knowledge, leading to hallucination and biases. If an LLM can better access its knowledge and know what it knows, it can avoid making false or unsupported claims. Therefore, it is crucial to evaluate whether LLMs have the ability to know what they know, as it can help to ensure accuracy and faithfulness in real-world applications. Inspired by research in Educational Psychology, surface learners who don’t really know are easily affected by teacher and peer guidance, we treat LLM as a student, incorporate role guidance in prompts to explore whether LLMs really know. Specifically, we propose a novel strategy called Role-Guided and Self-Reflection (RoSe) to fully assess whether LLM “knows it knows”. We introduce multiple combinations of different roles and strong reminder in prompts combined with self-reflection to explore what local information in prompt LLMs rely on and whether LLMs remain unaffected by external guidance with varying roles. Our findings reveal that LLMs are very sensitive to the strong reminder information. Role guidance can help LLMs reduce their reliance on strong reminder. Meanwhile, LLMs tend to trust the role of authority more when guided by different roles. Following these findings, we propose a double-calibrated strategy with verbalized confidence to extract well-calibrated data from closed-source LLM and fine-tune open-source LLMs. Extensive experiments conducted on fine-tuning open-source LLMs demonstrate the effectiveness of double-calibrated strategy in mitigating the reliance of LLMs on local information. For a thorough comparison, we not only employ public JEC-QA and openBookQA datasets, but also construct EG-QA which contains English Grammar multiple-choice question-answering and 14 key knowledge points for assessing self-knowledge and logical reasoning. | [
"LLMs",
"Verbalized confidence",
"Shortcut learning"
] | Accept (Poster) | https://openreview.net/pdf?id=E36NHwe7Zc | https://openreview.net/forum?id=E36NHwe7Zc | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zUe6p6sJ6t",
"yl67A02qww",
"sy93ZGJqVA",
"rp8IWEeq2W",
"qbFYxPoX1f",
"qAYffaCEKq",
"ocNQvGzQt5",
"o0H9a7BUJm",
"mJMWVqctes",
"k2HQ8FLcnc",
"gL2oUl5yV9",
"dJAz6O8B2I",
"cUgxCsh0vq",
"b7dPyZrp5q",
"b5N1vHJ63S",
"aCpHwaouJt",
"U9gy33ovc5",
"Qz0udIblGs",
"QpImlU5KSY",
"LWVnS8rQ2E",
"L9PhEdkFzk",
"J5c8Yru6NG",
"ITp3s03FOH",
"HzwhdsTs3i",
"HIQK4JxjYl",
"Dxqylz9hKF",
"8qN3AeGmww",
"8pSsyaaZ0A",
"8Ey1qFj2dZ",
"85coIQXiaJ",
"6L7gYM80eA",
"4SEaaNftTB",
"2MRUUuP0z3",
"0qSU95UuZ5"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732505990060,
1732679759588,
1732679131256,
1732505666797,
1732098554118,
1732097240586,
1737524274867,
1730623085021,
1732097469699,
1732505124186,
1732098445315,
1732098160265,
1732096858714,
1732096067366,
1729872491607,
1730598982318,
1732826052922,
1732680069161,
1732852264109,
1732628018264,
1734676451131,
1732567470665,
1732711923180,
1732694143609,
1732711587471,
1732708141993,
1732505267969,
1732097615971,
1732096203865,
1732678408208,
1732707550379,
1729465242025,
1732244191244,
1732097913480
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_nbt9"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_mKeV"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_JdDF"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_JdDF"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_mKeV"
],
[
"ICLR.cc/2025/Conference/Submission13663/Area_Chair_Ronq"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_fznJ"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_fznJ"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_mKeV"
],
[
"ICLR.cc/2025/Conference/Submission13663/Reviewer_fznJ"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13663/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Dear reviewer fznJ,\\n\\nAs the open discussion period draws to a close in a few days, we want to check back to see whether you have any remaining concerns. We thank reviewer fznJ again for engaging with our work thoughtfully and constructively. We have provided global responses for all reviewers to highlight several supplements to the paper. In addition, we believe that we have sufficiently responded to your earlier queries on various aspects of this work, and we provide a short summary here for your convenience:\\n\\n\\n- A specific, fine-grained evaluation method (the RoSe strategy) and a novel approach to fine-tuning open-source LLMs (the double-calibrated strategy).\\n- A detailed introduction of the RoSe strategy, calibrated fine-tuning, and well-calibrated data.\\n- The reason for choosing EG-QA: it helps with fine-tuning without manual annotation, and it is currently available [here](https://anonymous.4open.science/r/EG-QA-C2B2).\\n- Clarification of several points of confusion: logical consistency, model calibration ability, and the conversion of verbal confidence levels into scores.\\n- More experimental results confirming the link between \\\"not really know\\\" and \\\"easily affected\\\".\\n\\n\\nPlease let us know if/how we can address any remaining concerns, and we are grateful for any additional feedback and suggestions.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer fznJ (3/4)\", \"comment\": \"**C3.** I don't think the reason is convincing enough for not choosing more widely-used datasets such as BBH or MATH. According to your shared dataset, it seems that all questions are in English. I wonder why you claim the dataset to be bilingual. In addition, I'm not sure whether you have noticed that a lot of Unicode white space characters are used in your dataset (marked as red in the anonymous GitHub). Would it cause some trouble (not errors, but incorrect word separations) in tokenization?\\n\\n**Answer**: Thanks for your question! We have added additional instructions in the README file and would like to address both your questions and the importance of EG-QA.\\n\\n- Bilingual EG-QA: Since EG-QA contains English examinations **for Chinese students**, **the corresponding question stem** (clarifying the task, providing background information, setting requirements and constraints) **is in Chinese**, and some questions have stems that contain details about the exam, such as the particular region and grade in which it took place. **We are not sure whether this violates the anonymity policy, so we removed the stem part**. In addition to the stem part, EG-QA also includes Chinese explanations for certain unfamiliar words. We give some examples [here](https://anonymous.4open.science/r/EG-QA-C2B2) that do not violate the anonymity policy. When EG-QA is officially released, we will disclose the whole dataset.\\n\\n- Data processing: We have normalized the white space characters to a standard space character (U+0020) before tokenization when loading data.\\n\\n- EG-QA: The core issue is the need for testing environments after fine-tuning LLMs:\\n\\n 1. Current QA datasets often cannot guarantee they weren't employed during pre-training or RLHF processes (the **data contamination** issue raised by Reviewer nbt9). This undermines the model's ability to generalize and evaluate new or unseen data. If the model has memorized answers from the QA datasets used in pre-training, it could fail to properly reason through questions, simply recalling answers it has already encountered rather than aligning through fine-tuning.\\n\\n 2. Existing benchmarks **lack a clear ID and OOD test environment**, failing to properly evaluate how well a model generalizes to new, unseen data after fine-tuning. Meanwhile, this causes an **over-optimistic evaluation** problem: if only ID data is used for evaluation, the model may appear to generalize well, but in reality, it may only perform well on data similar to what it was trained on. This could lead to misleading conclusions about the model\\u2019s real-world performance. It is crucial to understand how well a model performs outside of its training environment by evaluating both ID and OOD data.\\n\\n The EG-QA dataset addresses these issues with newly collected data, contains diverse knowledge points, and provides clear ID and OOD splits.\\n \\n---\\n\\n\\n**C4.** I'm looking forward to seeing your code. It would be better if you could adapt your code to PyTorch and CUDA (I'm not sure whether PyTorch supports Ascend NPUs) for easier evaluation or adaptation of your method.\\n\\n**Answer**: Thanks for your question! Since the paper has not yet been published and ICLR is an open platform (we found that a few people are already looking at this repository), we are releasing the code for fine-tuning LLaMA3-8B [here](https://anonymous.4open.science/r/EG-QA-C2B2). We hope this will help! The code for fine-tuning models is all based on PyTorch. The experimental setup is detailed in Section 5.2. Both Qwen and LLaMA3 are fine-tuned on A100-80G GPUs, and only Spark is fine-tuned on Ascend 910B 64G NPUs. The Ascend 910B also supports PyTorch; see https://github.com/Ascend/pytorch.\\n\\nFurthermore, the open-source community has already made fine-tuning widely and easily accessible, so fine-tuning itself is not the main contribution of the paper. One of the key contributions of our work is developing a double-calibrated strategy for extracting high-quality CoT processes, which are essential for improving the model's performance during fine-tuning. The fine-tuned data, models, and code will all be open-sourced after the paper is published (**mentioned in footnote 8**).\\n\\n---\\n\\n**C5.** Then why are p and q removed from the conditional terms?\\n\\n**Answer**: Thanks for your detailed question! We want to emphasize logical consistency, i.e., the model response is updated after each reasoning step based on the current reasoning result. The internal answer and confidence scores are consistent with its CoT process, which is also a reflection of the model's calibration ability.\\n\\nHowever, considering the confusion caused by the logical consistency $P(a,c|r)$, we have modified it in the paper (highlighted in green).\\n\\n---\"}",
"{\"title\": \"Response to Reviewer fznJ (2/4)\", \"comment\": \"**C2.** I appreciate your revision of the narration. It would be better if you could better illustrate the entire pipeline of the article, including the fine-tuning part.\\n\\n**Answer:**\\nThanks for your question! We explained the main structure of the paper in the previous answer. Two main strategies (RoSe and double-calibrated) are proposed to help evaluate and fine-tune LLMs, respectively.\\nSince the reviewer is mainly confused about fine-tuning, we would like to introduce the **process** and **goal** of fine-tuning in detail. The process of fine-tuning is to optimize the model parameters based on task-specific data. In the following definition of the fine-tuning objective, we minimize the loss function, i.e., the gap between the predicted output of the model $M_{\\\\theta}(q \\\\oplus \\\\wp)$ and the actual reasoning process $(r \\\\oplus a \\\\oplus c)$:\\n\\n> $ \\\\theta^* = \\\\arg\\\\min_{\\\\theta} \\\\mathcal{L}(M_{\\\\theta}(q \\\\oplus \\\\wp), (r \\\\oplus a \\\\oplus c)).$\\n\\nTask-specific fine-tuning often requires high-quality human annotations, including detailed CoT processes ($r$) and answers ($a$). However, with the double-calibrated strategy, we can automatically obtain $r \\\\oplus a \\\\oplus c$ from the strong closed-source LLM. Instead of simply focusing on the data that answers correctly, we focus on the data that still answers correctly during the self-reflection and role guidance process, which contains high-quality reasoning. Meanwhile, coupled with confidence calibration ($c$), we can obtain the data that the LLM \\\"really knows\\\".\\n\\nFurthermore, we mention in the main paper:\\n\\n> We propose thought-based and calibrated fine-tuning methods to align the Chain-of-Thought (CoT) process with corresponding confidence levels at each reflection step.\\n\\nHere, the **thought-based method** fine-tunes the model to **align the reasoning process $r$**, and **calibrated fine-tuning** fine-tunes LLMs to **align the answer $a$ and confidence $c$**, i.e., the model's calibration capability. These correspond to the two components of the **double-calibrated strategy** mentioned in the previous answer. Therefore, by acquiring such well-calibrated data, we enable open-source LLMs to learn robust, high-quality CoT data during the fine-tuning process, while simultaneously improving their ability to self-reflect.\\n\\n\\n---\"}",
"{\"comment\": \"Dear reviewer mKeV,\\n\\nAs the open discussion period draws to a close in a few days, we want to check back to see whether you have any remaining concerns. We thank reviewer mKeV again for engaging with our work thoughtfully and constructively. We have provided global responses for all reviewers to highlight several supplements to the paper. In addition, we believe that we have sufficiently responded to your earlier queries on various aspects of this work, and we provide a short summary here for your convenience:\\n\\n- The answer settings in multiple-choice QA tasks.\\n- More experiments with subtle cue information (Appendix B.7).\\n- More evaluation metrics from GPT-4 and human annotation to assess internal consistency (Appendix B.6).\\n- The KD-questions settings on JEC-QA.\\n- The reference and discussion of the works recommended by the reviewer.\\n\\nPlease let us know if/how we can address any remaining concerns, and we are grateful for any additional feedback and suggestions.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer fznJ (4/4)\", \"comment\": \"**Comment8**: Is there any evidence proving that this also holds for LLMs? This paper shows that LLMs are affected to different degrees by different types of guidance, but it does not directly build the link between \\\"not really know\\\" and \\\"easily affected\\\".\\n\\n**Answer**: Thanks for the valuable question! The paper is based on findings in educational psychology: students who don\\u2019t really know are easily affected by teacher and peer guidance. In LLMs, \\\"not really know\\\" data can include two kinds of questions:\\n\\n- questions where the LLM answers incorrectly or does not know the answer.\\n\\n- questions where the LLM answers correctly but with low confidence, indicating it might change the answer and doesn\\u2019t truly know it.\\n\\nWe extract the data in the two-step prompt setting (the w/o RoSe setting). The data that the LLM answers correctly in both steps is regarded as \\\"know\\\", and data that consistently results in incorrect answers over two steps, or changes from correct to incorrect after reflection, is considered \\\"not really know\\\". The experimental results of the two groups of data under the RoSe strategy on EG-QA are as follows.\\n\\nThe experimental results of the RoSe strategy on \\\"know\\\" data:\\n\\n| Role | Reminder | Cue | step-1 acc | step-1 conf | step-2 acc | step-2 conf | step-3 acc | step-3 conf | overall acc | overall conf |\\n| ---- | -------- | ---- | ---------- | ----------- | ---------- | ----------- | ---------- | ----------- | ----------- | ------------ |\\n| T | \\u2713 | t | 0.9921 | 0.8765 | 0.9942 | 0.9318 | 0.9964 | 0.9848 | 0.9942 | 0.9042 |\\n| T | \\u2713 | r | 0.9590 | 0.8797 | 0.9633 | 0.9321 | 0.9619 | 0.9727 | 0.9614 | 0.9059 |\\n| C | \\u2713 | t | 0.9814 | 0.8750 | 0.9828 | 0.9310 | 0.9864 | 0.9814 | 0.9835 | 0.9030 |\\n| C | \\u2713 | r | 0.9679 | 0.8826 | 0.9701 | 0.9357 | 0.9693 | 0.9772 | 0.9691 | 0.9092 |\\n\\nThe experimental results of the RoSe strategy on \\\"not really know\\\" data:\\n\\n| Role | Reminder | Cue | step-1 acc | step-1 conf | step-2 acc | step-2 conf | step-3 acc | step-3 conf | overall acc | overall conf |\\n| ---- | -------- | ---- | ---------- | ----------- | ---------- | ----------- | ---------- | ----------- | ----------- | ------------ |\\n| T | \\u2713 | t | 0.4511 | 0.8358 | 0.4360 | 0.9064 | 0.4736 | 0.9584 | 0.4536 | 0.8711 |\\n| T | \\u2713 | r | 0.3106 | 0.8343 | 0.3030 | 0.9135 | 0.3030 | 0.9609 | 0.3055 | 0.8739 |\\n| C | \\u2713 | t | 0.3511 | 0.8373 | 0.3740 | 0.9007 | 0.3969 | 0.9477 | 0.3740 | 0.8690 |\\n| C | \\u2713 | r | 0.2519 | 0.8312 | 0.2290 | 0.8974 | 0.2137 | 0.9446 | 0.2315 | 0.8643 |\\n\\n\\n\\nOverall, GPT-4 shows a slight impact from different roles when processing the \\\"know\\\" data. However, when handling \\\"not really know\\\" data, the influence of different roles is more pronounced, resulting in a difference of over 15%. Besides, on the \\\"not really know\\\" data, the model's calibration ability is also worse, and the overall confidence level is lower than on the \\\"know\\\" data. Therefore, we can build the link between \\\"not really know\\\" and \\\"easily affected\\\" in LLMs: **LLMs are easily affected by role guidance when they \\\"don't really know\\\".** Meanwhile, we can also consider questions that are easily affected by role guidance and lead to changed answers as \\\"not really know\\\" data.\\n\\nWe will improve our paper based on all the constructive comments.\"}",
"{\"title\": \"Response to Reviewer mKeV (1/3)\", \"comment\": \"We thank the reviewer for the thoughtful and detailed comments. We are pleased that the reviewer considers our research to align well with real-world reasoning patterns. We appreciate the opportunity to address the concerns here.\\n\\n---\\n\\n**Comment1**: This work mostly used random answers to mislead the model, but they didn\\u2019t explain in detail how these answers were generated to ensure they\\u2019re diverse and realistic.\\n\\n**Answer**: Thank you for the insightful comment. Our primary test scenario aligns with the findings in educational scenarios that \\\"students who don't really know are easily affected by teacher and peer guidance\\\". We collect EG-QA and employ JEC-QA as test suites comprising multiple-choice examination questions, tailored for middle school students and law candidates, respectively.\\n\\nWe evaluate on the multiple-choice QA task, where answers are usually identified by a letter (e.g., A, B, C, D). Therefore, both truth and random answers are letters, which ensures the stability, consistency, and reproducibility of the experimental results. In the evaluation of random answers, we utilize a fixed seed to generate a random letter for each question; the probability that the generated answer differs from the correct answer is about 75%.\\n\\n---\\n\\n**Comment2**: The misleading information in the tests was mostly straightforward or basic incorrect answers. But in real-world scenarios, misleading information is often subtler or harder to detect. This setup might not fully capture the challenges the model would face in real-life situations.\\n\\n**Answer**: Thanks for your insightful suggestion! In this paper, we treat the LLM as the student, so we evaluate the performance of LLMs in both English and domain-specific law examinations.\\n\\nAs the reviewer suggests, cue information in prompts can be subtle in real-world scenarios. Meanwhile, considering that the answers provided by the teacher or the classmate are not independent of the question itself, the LLM will not believe the information provided by role guidance if it is unrelated to the question. We therefore substitute the letter options in prompts with their corresponding textual descriptions (each letter corresponds to a specific text content or answer description). This transformation makes the prompt information more complex and requires deeper understanding and processing by LLMs.\\n\\nThe experimental results on EG-QA are as follows, where $t_c$ and $r_c$ denote the textual descriptions of the truth and random answer:\\n\\n| Role | Reminder | Cue | step-1 acc | \\u0394 | step-1 conf | step-2 acc | \\u0394 | step-2 conf | step-3 acc | \\u0394 | step-3 conf | overall acc | overall conf |\\n| ---- | -------- | ----- | ---------- | ------- | ----------- | ---------- | ------- | ----------- | ---------- | ------- | ----------- | ----------- | ------------ |\\n| w/o | \\u2717 | \\u2717 | 0.9108 | - | **0.8889** | 0.9159 | - | **0.9676** | - | - | - | 0.9134 | 0.9283 |\\n| T | \\u2713 | t | **0.9431** | +0.0323 | 0.8726 | **0.9450** | +0.0291 | 0.9295 | **0.9494** | +0.0334 | 0.9825 | **0.9458** | 0.9282 |\\n| T | \\u2713 | r | _0.9070_ | -0.0038 | 0.8752 | 0.9108 | -0.0051 | 0.9302 | 0.9101 | -0.0058 | 0.9716 | 0.9093 | 0.9257 |\\n| C | \\u2713 | t | 0.9322 | +0.0214 | 0.8717 | 0.9335 | +0.0176 | 0.9287 | 0.9373 | +0.0213 | 0.9785 | 0.9343 | 0.9263 |\\n| C | \\u2713 | r | 0.9085 | -0.0023 | 0.8781 | _0.9092_ | -0.0067 | 0.9325 | _0.9067_ | -0.0092 | 0.9741 | _0.9081_ | 0.9282 |\\n| T | \\u2713 | $t_c$ | **0.9390** | +0.0282 | 0.8796 | **0.9433** | +0.0274 | 0.9328 | **0.9457** | +0.0298 | 0.9715 | **0.9427** | 0.9062 |\\n| T | \\u2713 | $r_c$ | _0.8700_ | -0.0408 | 0.8810 | _0.8688_ | -0.0471 | 0.9312 | _0.8639_ | -0.0520 | 0.9623 | _0.8676_ | 0.9061 |\\n| C | \\u2713 | $t_c$ | 0.9194 | +0.0086 | 0.8852 | 0.9219 | +0.0060 | 0.9365 | 0.9225 | +0.0066 | 0.9698 | 0.9212 | 0.9109 |\\n| C | \\u2713 | $r_c$ | 0.8741 | -0.0367 | 0.8885 | 0.8796 | -0.0363 | 0.9372 | 0.8778 | -0.0381 | 0.9679 | 0.8772 | 0.9128 |\\n\\nUnder the influence of subtle cue information, the overall performance of GPT-4 is lower than that under letter cue information, and the overall conclusions of the experimental results are consistent with the findings in Section 5.4.1. Compared to letter options, the LLM is less sensitive to textual cue information and finds it difficult to associate with the option content in the original question. Since it cannot distinguish between relevant and misleading information, the LLM becomes distracted and unable to reason and answer questions effectively. We analyze the experimental results in Appendix B.7 of the updated version.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"This paper studies and evaluates whether large language models (LLMs) are confident in their acquired knowledge. It claims that LLMs fine-tuned with RLHF could potentially over-rely on aligned preferences instead of truly gaining the knowledge, and asks whether LLMs are genuinely confident in that knowledge. To qualitatively assess whether LLMs have a sense of what they know, a Role-Guided and Self-Reflection (RoSe) method is proposed. Specifically, it combines prompting and self-reflection to examine the sensitivity of LLMs to parametric knowledge and contextual knowledge. In the paper, several findings are elaborated. For example, empirical results reveal that LLMs are sensitive to the prompt. By assuming roles, LLMs are prone to be less dependent on contextual knowledge. Based on the findings, the authors further propose a calibration-based method to extract high-quality SFT data. Fine-tuning on the SFT data improves the overall confidence when LLMs generate outputs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The motivation is convincing. Previous studies have revealed that deep learning models suffer from confidence calibration. Assessing the confidence level of LLMs is an important topic and would benefit a wide spectrum of the NLP community.\", \"The experimental results are quite interesting and the findings are refreshing.\"], \"weaknesses\": [\"Some details seem unclear to me. For example, what exactly is the _verbalized confidence_?\", \"It seems that the fine-tuning portion of the experiments is conducted entirely on the EG-QA dataset, which is proposed in this submission as well. Whether the dataset suffers from data contamination needs serious examination.\", \"The proposed method mainly considered three factors to examine the confidence of LLM outputs (role, cue, etc.). There could be various other factors that have an impact on the confidence (pre-training data, SFT data, preference data). Extensive studies on these factors might be needed to compose a \\\"comprehensive\\\" study.\", \"I think Section 4.2 could use some improvement. After reading it, it is still unclear to me how to conduct the so-called \\\"double-calibration\\\". I suggest the authors use some examples or diagrams to further illustrate.\"], \"questions\": [\"What exactly is the _verbalized confidence_?\", \"How is the _double calibration_ achieved?\", \"How are the new datasets curated? How to make sure they are of high quality?\", \"What is step-3 in the experiment section?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer mKeV (2/3)\", \"comment\": \"**Comment3**: The model\\u2019s self-knowledge is mainly judged by its confidence levels and accuracy. These indicators alone might not be enough to fully capture how well the model truly understands its answers.\\n\\n**Answer**: Thanks for the valuable suggestion! In this paper, we make evaluations and propose a double-calibrated strategy that combines accuracy and confidence scores. The well-calibrated data help extract high-quality reasoning data to improve the reasoning ability of open-source LLMs. Following the reviewer's suggestion, we further evaluate the internal consistency of the LLM in the self-reflection process; we provide a detailed analysis in Appendix B.6 of the updated version.\\n\\nFirst, we employ GPT-4 ($Con_{GPT}$) and human annotation ($Con_{human}$) to evaluate the internal consistency between the reasoning steps of the model on the challenging samples. We consider the samples where the LLM makes errors during the two-step reflection process to be challenging. In EG-QA, approximately 8% of the data consists of these challenging samples, which are not really known to the LLM.\\n\\nWe utilize the prompt in Table 14 (in the updated version) and human annotation to evaluate the reasoning consistency of the LLM in three steps (in step-1, we prompt LLMs to output answers; in step-2, we prompt LLMs to self-reflect on the answer of the previous step and further answer the question; in step-3, we employ different role guidance to evaluate the performance of LLMs).\\n\\nSpecifically, on the consistency between steps 1 and 2, there is little difference between GPT-4 and human annotations. However, on the consistency between steps 2 and 3, the human annotations demonstrate higher consistency. Although the logical expression from step-2 to step-3 is consistent, GPT-4 annotations tend to focus more on semantic consistency, often overlooking the progression of logical expression. Overall, the logical reasoning in the self-reflection process across the three steps is consistent for LLMs.\\n\\nThen, in the $Guidance$ column of the experimental results, we manually annotate whether the responses to these challenging samples in step-3 follow the role guidance information. Consistent with the findings in **RQ3 of Section 5.4.1**, LLMs tend to trust authoritative roles more and are more easily affected by the authority-teacher.\\n\\n| Role | Reminder | Cue | step-1&2 $Con_{GPT}$ | step-1&2 $Con_{human}$ | step-2&3 $Con_{GPT}$ | step-2&3 $Con_{human}$ | step-3 $Guidance$ |\\n| ---- | -------- | ---- | -------------------------- | ---------------------------- | -------------------------- | ---------------------------- | --------------- |\\n| T | \\u2713 | t | 0.9548 | 0.9473 | 0.7669 | 0.9248 | 0.4210 |\\n| T | \\u2713 | r | 0.9545 | 0.9772 | 0.7954 | **0.9924** | **0.4318** |\\n| C | \\u2713 | t | **0.9923** | **0.9923** | 0.8778 | 0.9618 | 0.3816 |\\n| C | \\u2713 | r | 0.9236 | 0.9312 | **0.8854** | 0.9923 | 0.2213 |\\n\\n---\\n\\n\\n**Comment4**: Roles like \\u201cjudge\\u201d often require objectivity and caution, which might make the model more conservative in its responses. This cautious approach could limit the model\\u2019s effectiveness, especially in tasks that require flexible reasoning or hypothesis testing.\\n\\n**Answer**: Thanks for the constructive question! As illustrated in lines 277-281 (revision lines 274-279), the test scenario mainly concerns multiple-choice questions in the context of the legal professional examination. We mainly evaluate on Knowledge-Driven questions (KD-questions), which focus on the **fixed knowledge** corresponding to the law articles, such as civil law, commercial law, criminal law, etc. When dealing with case analysis questions, LLMs might respond more conservatively. However, for questions relying on established knowledge, they don\\u2019t need to be conservative and should focus on selecting the appropriate answer.\\n\\n---\"}",
"{\"comment\": \"Dear reviewer nbt9,\\n\\nAs the open discussion period draws to a close in a few days, we want to check back to see whether you have any remaining concerns. We thank reviewer nbt9 again for engaging with our work thoughtfully and constructively. We have provided global responses for all reviewers to highlight several supplements to the paper. In addition, we believe that we have sufficiently responded to your earlier queries on various aspects of this work, and we provide a short summary here for your convenience:\\n\\n1. A detailed introduction of verbalized confidence and \\\"step-3\\\" in the experimental section.\\n2. No data contamination issue in EG-QA.\\n3. The \\\"double-calibrated\\\" strategy in Section 4.2 has been improved.\\n4. The paper reveals why model confidence is affected at the data and model levels.\\n\\nPlease let us know if/how we can address any remaining concerns, and we are grateful for any additional feedback and suggestions.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer fznJ (3/4)\", \"comment\": \"**Comment5**: The narration in Lines 180--182 is confusing. Does it mean the answer and confidence are generated only based on the reasoning chain, without seeing the original prompts?\\n\\n**Answer**: Thanks for the great question! Answers and confidence levels are generated incrementally. Although the overall process still depends on the original question $q$ and prompt $\\\\wp$, the model is updated after each reasoning step based on the current reasoning result. Therefore, the final answer and confidence are always associated with the original $q$ and $\\\\wp$. As illustrated in the paper:\\n\\n>We want to maximize the conditional probability of $r$, $a$, $c$: $P(y|\\\\wp,q) = P(r,a,c|\\\\wp,q)$. Based on logical consistency $P(a,c|r)$, we could obtain $P(r|\\\\wp,q) \\\\cdot P(a,c|r,\\\\wp,q)$, signifying $P(r|\\\\wp,q) \\\\cdot P(a,c|r)$.\\n\\nLogical consistency here does not mean discarding $q$ and $\\\\wp$ completely but refers to the internal self-consistency of the LLM; that is, the reasoning and the final answer at each step in the reasoning analysis $r$ should be logical, and the confidence $c$ should be consistent with the correctness of the answer $a$. Thus, although the reasoning analysis can be generated independently, it still relies on $q$ and $\\\\wp$ to maintain a suitable reasoning path and ensure the soundness of the chain of thought.\\n\\n---\\n\\n**Comment6**: The results tables (2,3,4,5) show poor model calibration. In fact, the verbal confidence scores\\n\\n**Answer**: Thanks for your question! Model calibration refers to the consistency between the confidence score of the model output and the actual accuracy. As we mentioned in several findings in RQ4 of Section 5.4.1 for the evaluation stage:\\n\\n- It is first evident that the confidence of GPT-4 increases through reflection steps, while LLMs show overconfidence at step-3 under different strategies.\\n\\n- It is observed that LLMs exhibit the highest level of confidence in settings where shortcuts (reminders) are easy to capture, consistent with findings in deep neural models.\\n\\n- Notably, despite the high confidence levels, *the overall confidence level of LLMs in settings with random cues is lower than that in settings with truth cues.* This is consistent with their performance accuracy.\\n\\n- The overall confidence of the LLMs at step-3 decreases under the guidance of different roles compared to no-role guidance, which is similar to student performance and reflects their uncertainty.\\n\\nIn addition, in the fine-tuning stage, the calibration ability of the fine-tuned LLM has improved, and the gap between confidence scores and accuracy narrows, as shown in Tables 4 and 5.\\n\\n---\\n\\n**Comment7**: How was the verbal confidence level such as \\\"very confident\\\" converted to scores?\\n\\n**Answer**: Thanks for the detailed question! In Appendix A.2, we mentioned:\\n\\n> In the statistics of experiments, since there are few cases of non-numerical confidence levels and it is difficult to quantify, we compute on numerical confidence levels.\\n\\nIf verbalized confidence is not expressed as a numerical value, the confidence does not exhibit the progressive relationship that numerical scores show as the reflection steps deepen. Therefore, it is difficult for us to quantify it directly as a numerical value, which would not only be unfair in numerical statistics but would also fail to reflect the confidence level of the LLM directly. Considering the few such cases (10-20%), we only compute on numerical confidence levels.\\n\\n---\"}",
"{\"title\": \"Response to Reviewer fznJ (2/4)\", \"comment\": \"**Comment3**: Some choices are not fully explained. For example, why the authors choose to do the main evaluation on the self-developed EG-QA dataset rather than other open-source datasets such as BBH, which also provides CoT chains in their answers.\\n\\n**Answer**: Thanks for your insightful question! As we mentioned in Section 5.1, the EG-QA dataset primarily consists of standardized multiple-choice examination questions collected from [website](http://www.zxxk.com/), which is an authoritative educational resources website, designed for Chinese teachers and students to provide teaching resources, learning resources, providing examination papers, teaching courseware and other text materials. The data in EG-QA include questions, standard answers, and knowledge points, requiring no additional manual annotation. The data originates from after December 2023, ensuring high quality and no data contamination issues. EG-QA is currently available [here](https://anonymous.4open.science/r/EG-QA-C2B2). \\n\\nInspired by findings in educational psychology, we adopt exam questions to evaluate LLMs under RoSe strategy. To ensure high stability, consistency, and reproducibility in evaluations, the options (letters) of multiple-choice QA tasks serve as cue information in the RoSe strategy without deviating from the question itself, ensuring stable assessments. Although there are several open-source multiple-choice datasets (BBH [1], RACE [2], MT-test [3], etc.), they fail to meet some requirements of the paper:\\n\\n- **Test Environment for Fine-Tuning LLMs**: \\nIn this paper, we propose a double-calibrated strategy to effectively fine-tune open-source LLMs. Since the fine-tuning process is prone to overfitting, it is necessary to conduct a full evaluation of both ID and OOD datasets. 
Current QA datasets often cannot guarantee they weren't employed during pre-training or RLHF processes (the data contamination issue raised by Reviewer nbt9). They also lack clear ID and OOD divisions, hindering the assessment of generalization. The EG-QA dataset addresses these issues with newly collected data and contains diverse knowledge points, providing clear ID and OOD splits. In addition, fine-tuning an LLM might affect its commonsense reasoning ability, so we also employ the open-source OpenBookQA for detailed evaluation.\\n\\n- **Bilingual Multiple-Choice QA Dataset**: \\nEG-QA is designed for Chinese students and includes both English and Chinese content, making it a bilingual dataset that better assesses model performance. For example, we discovered that LLaMA3-8B still faces challenges with bilingual questions. The questions require models to comprehend and integrate multiple information sources, enhancing their reasoning and application of internal knowledge. Therefore, EG-QA provides a valuable open-source project for the NLP community and researchers needing English multiple-choice QA datasets.\\n\\n- **No Manual Annotation Required**: \\nEG-QA is entirely based on real exam questions from students across various regions in China, accurately reflecting real educational scenarios, which avoids the influence of human annotation preferences (an issue raised by Reviewer nbt9).\\n\\n[1] Suzgun M, Scales N, Sch\u00e4rli N, et al. Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them, Findings of ACL, 2023.\\n\\n[2] Lai G, Xie Q, Liu H, et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations, EMNLP 2017: 785-794.\\n\\n[3] Hendrycks D, Burns C, Basart S, et al. Measuring Massive Multitask Language Understanding, ICLR 2021.\\n\\n---\\n\\n**Comment4**: The reproducibility might be an issue. 
The proposed dataset EG-QA is not shared, the GPT versions are not specified, the fine-tuning objective is not sufficiently elaborated, etc.\\n\\n**Answer**: Thanks for your question! We will open-source our code and dataset; EG-QA is currently available [here](https://anonymous.4open.science/r/EG-QA-C2B2). In line 798 (revision line 800), we mention that the version of GPT-4 is the latest GPT-4 turbo-0409. In addition, we specify GPT-3.5 in the updated version as GPT-3.5 turbo-1106. \\n\\nIn the answer to Comment2, we highlight the purpose of fine-tuning. Besides, open-source LLMs often have weaker reasoning and instruction-following abilities compared to closed-source LLMs. Fine-tuning aims to enhance these reasoning skills while focusing less on strong reminder information, as demonstrated in Appendix B.3.\\n\\n---\"}
"{\"title\": \"Response to Reviewer JdDF\", \"comment\": \"We are grateful for your thoughtful feedback on our paper and happy to learn that you find our research detailed and the insights we provide interesting! We will address your concerns as follows.\\n\\n---\\n\\n**Comment1**: The paper did not mention explicitly in the main method section about what are \\\"role-guided\\\", \\\"self-reflection\\\".\\n\\n**Answer**: We are sorry for any confusion on the terms \\\"role-guided\\\" and \\\"self-reflection.\\\" Specifically, as stated in lines 76-81 and motivated by research in Educational Psychology, when students are not confident in their own performance (not really know), teacher and peer guidance may lead them to give up independent and in-depth thinking. In this paper, we treat the LLM as a student, incorporate **role guidance** with self-reflection in the prompt, explore what information the LLM depends on in several prompt settings, and examine *whether role guidance really shakes up the performance of the LLM*.\\n\\nThen, in lines 194-196, we introduce the **self-reflection** strategy, which involves three steps that prompt the LLM to reflect deeply on its response while *verifying whether it adheres to the correct response*.\\n\\nWe prompt LLMs to self-reflect in the second and third steps and incorporate role guidance in the third step. The relevant prompt is as follows:\\n\\n> step 1: Please read the questions and options carefully and give the most appropriate answers and confidence; \\n\\n> step 2: Please read the questions and options carefully, continue to think, **reflect on the answer of step 1**, give the most appropriate answer and confidence;\\n\\n> step 3: My teacher thinks the answer is \\{Truth\\}. 
Please read the questions and options carefully, continue to think, **reflect on the answer of step 2**, and give the most appropriate answer and confidence.\\n\\nMore prompt settings in step-3 can be found in Table 7.\\n\\n---\\n\\n**Comment2**: What is \\\"conf\\\" in Table 2 and Table 3. This is not a typical metric and the authors should explain why it is important.\\n\\n**Answer**: Thanks for your pertinent question! \\\"conf\\\" refers to the verbalized **conf**idence in the paper. Specifically, since the log-probabilities of LLMs represent uncertainty over tokens (ways of expressing a claim) and not epistemic uncertainty over claims themselves, verbalized confidence was proposed by OpenAI [1] to elicit confidence from LLMs and estimate their confidence in their responses. We employ verbalized confidence to assess the model\u2019s self-knowledge (really know), which is discussed in lines 99, 147-152, and 159-160. \\n\\nWe prompt the LLM to output a confidence score at each step, e.g., \\\"give the most appropriate answer and confidence\\\". It can be represented in percentage terms or using explicit descriptors like \\\"high\\\". In Appendix A.2, we mentioned: \\n\\n> In the statistics of experiments, since there are few cases of non-numerical confidence levels and it is difficult to quantify, we compute on numerical confidence levels.\\n\\nBy integrating accuracy with confidence scores, we can better assess the model\u2019s self-awareness of its knowledge and enhance the calibration ability of LLMs.\\n\\n[1] Lin S, Hilton J, Evans O. Teaching Models to Express Their Uncertainty in Words. Transactions on Machine Learning Research, 2022.\\n\\n---\\n\\n**Comment3**: Could you explicitly explain the \\\"Role\\\", \\\"Rem\\\", \\\"Cue\\\", \\\"conf\\\", \\\"com\\\" appearing in the experiment result table?\\n\\n**Answer**: Thanks for your valuable suggestion! 
The explanations for \\\"conf\\\" and \\\"acc\\\" are supplemented on the captions of Tables, which can be found in line 325-329 of the updated version. In the answer to Comment2, we explain the concept of \\\"conf\\\" (verbalized confidence). Here we detailedly answer the reviewer's question to explain the meaning of \\\"Role\\\", \\\"Rem\\\", \\\"Cue\\\", \\\"com\\\":\\n\\n\\\"Role\\\", \\\"Rem\\\", \\\"Cue\\\" are important elements in role guidance prompts. Specifically, except that no role, **\\\"Role\\\"** could be \\u201cteacher\\u201d or \\u201cclassmate\\u201d in educational scenarios, and also could be \\\"Judge\\\" or \\\"lawyer\\\" in legal scenarios. **\\\"Rem\\\"** is the abbreviation for \\\"**Rem**inder\\\". In RoSe strategy, the strong reminder is \\\"answer is\\\". **\\\"Cue\\\"** information represents the answer corresponding to the question, which could be the ground-truth or random answer (could be found in line 84-86). \\n\\n\\n**$com$** is a new metric defined as the comprehensive completion degree in line 467-469 (revision line 470-473). Since open-source base LLMs usually cannot give a definite answer in step-1 and step-2 during experiments, exhibiting task avoidance [1]. To make fair comparisons, considering accuracy x and completion degree $C$ ($C$ refers to the proportion of LLM that gives the exact answer), we adopt the variant of F1-scores as evaluation on $com$: $2 \\\\times \\\\frac{A \\\\times C}{A+C}$.\\n\\n[1] Zhou L, Schellaert W, Mart\\u00ednez-Plumed F, et al. Larger and more instructable language models become less reliable. Nature, 2024.\\n\\nWe will improve our work based on all the constructive comments.\"}",
"{\"title\": \"Response to Reviewer nbt9 (1/2)\", \"comment\": \"We thank the reviewer for the thoughtful and detailed comments. We are pleased that the reviewer finds our motivation convincing and our findings refreshing! We appreciate the opportunity to address the concerns here.\\n\\n---\\n\\n**Comment1&5**: What exactly is the verbalized confidence?\\n\\n**Answer**: We are sorry that the concept of \u201cverbalized confidence\u201d confused you. Based on the concept of Verbalized Calibration [1] first introduced by OpenAI, they find the following: since the log-probabilities of models like GPT-3 represent uncertainty over tokens (ways of expressing a claim) and not epistemic uncertainty over claims themselves, GPT-3 can learn to express calibrated uncertainty using words (\\\"verbalized probability\\\"), i.e., express uncertainty in language (\\\"61%\\\" or \\\"medium confidence\\\").\\n\\nFor verbalized confidence, we note that humans are able to verbalize their uncertainty, e.g., giving insight as to whether their answers and reasonings are correct or not. It is essential for LLMs to have the ability to know what they know rather than solely relying on data statistics. In Verbalized Confidence of Section 2, we introduce some related work in detail. Specifically, recent works on verbalized confidence argue that a trustworthy real-world prediction system should produce well-calibrated confidence scores. In our paper, we employ verbalized confidence to solve two problems:\\n\\n- To perform a more comprehensive evaluation: verbalized confidence helps us recognize the extent to which the LLM knows the problem on its own, as we mentioned in lines 45-48.\\n\\n- Verbalized confidence helps obtain high-quality data to fine-tune open-source LLMs, as we mentioned in lines 237-238 (revision lines 233-234).\\n\\n[1] Lin S, Hilton J, Evans O. Teaching Models to Express Their Uncertainty in Words. 
Transactions on Machine Learning Research, 2022.\\n\\n---\\n\\n**Comment2&7**: Whether the EG-QA dataset suffers from data contamination needs serious examination.\\n\\n\\n**Answer**: Thanks for the insightful question! As we mentioned in Section 5.1, the EG-QA dataset primarily consists of standardized examination questions collected from [website](http://www.zxxk.com/), an authoritative educational resources website designed for Chinese teachers and students, providing teaching resources, learning resources, examination papers, teaching courseware, and other text materials. The data in EG-QA include questions, standard answers, and knowledge points, requiring no additional manual annotation. The data originates from after **December 2023**, ensuring high quality and no data contamination issues. EG-QA is currently available [here](https://anonymous.4open.science/r/EG-QA-C2B2). \\n\\n---\\n\\n**Comment3**: There could be various other factors that have impact on the confidence (pre-training data, SFT data, preference data). Massive amount of studies on these factors might be needed to compose a \\\"comprehensive\\\" study.\\n\\n**Answer**: Thanks for your valuable question! Evaluating LLMs is inherently complex, with varying motivations, goals, and methods. Therefore, we evaluate LLMs from the perspective of a **comparative study** through role guidance and the self-reflection strategy. In addition to the factors at the data level mentioned by the reviewer, LLMs are also affected by model gradient training. In this paper, we obtain findings at both the data level and the model training level:\\n\\n- At the data level, LLMs' alignment with human preferences for safety and trust may make them more attentive to human concerns in the SFT stage. Despite extensive efforts to reduce bias in SFT data (preference data), our work shows that LLMs are still prone to trusting authoritative roles (in RQ3 of Section 5.4.1). 
Data-level bias is still pervasive, so it is important to mitigate the hidden biases in LLMs. As shown in Tables 4, 5, 6, the double-calibrated strategy proposed to fine-tune open-source LLMs can mitigate the bias toward authoritative roles caused by SFT data.\\n\\n- At the training level, gradient training can lead models to find shortcuts. Given strongly-correlated and fast-to-learn features in training data, gradient descent is biased toward learning them first [1]. As we illustrate in RQ2 of Section 5.4.1: LLMs tend to capture shortcuts by relying solely on the strong reminder \\\"answer is\\\" in prompts to quickly find the answer rather than understanding genuine relationships between prompt and truth during training. \\n\\n[1] Pezeshki M, Kaba O, Bengio Y, et al. Gradient starvation: A learning proclivity in neural networks. Advances in Neural Information Processing Systems, 2021.\"}
"{\"summary\": \"This paper focuses on testing and boosting the model\u2019s self-knowledge: its ability to tell the difference between what it truly understands and what it\u2019s guessing from training data, rather than just following prompts or role guidance.\\nThey\u2019re using different authoritative roles, like teacher or judge, to see how the model responds in each role, but the goal isn\u2019t to pick one set role for guiding it permanently.\\n\\nSo, the aim is to check if the model falls for misleading cues, especially when it doesn\u2019t actually know something. By introducing these authoritative roles, the researchers can see if the model just goes along with what it\u2019s told. \\nThis lets them understand how the model behaves in different scenarios and figure out the kinds of guidance that might encourage more independent thinking.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors implement role guidance by assigning roles, like \\\"teacher\\\" or \\\"judge,\\\" to help the model think in ways that better align with real-world reasoning patterns.\", \"Adding a self-reflection step enables the model to review its responses, which enhances accuracy and reliability while exploring its self-knowledge.\", \"The paper\u2019s double-calibration strategy combines role guidance with self-reflection, adjusting prompts and roles in iterative steps to reduce susceptibility to misleading information and improve answer stability.\", \"This approach also offers finer control during fine-tuning, helping the model handle uncertain information without relying solely on intuition or single-step decisions.\", \"The authors emphasize model self-knowledge, designing experiments to observe its confidence levels under different conditions. 
This focus helps develop models that are both accurate and capable of self-assessment, supporting more robust, real-world applications.\"], \"weaknesses\": [\"This work mostly used random answers to mislead the model, but they didn\\u2019t explain in detail how these answers were generated to ensure they\\u2019re diverse and realistic. If the random answers are too simple or repetitive, they may not truly test how well the model can handle more challenging misleading cues.\", \"The misleading information in the tests was mostly straightforward or basic incorrect answers. But in real-world scenarios, misleading information is often subtler or harder to detect. This setup might not fully capture the challenges the model would face in real-life situations.\", \"The model\\u2019s self-knowledge is mainly judged by its confidence levels and accuracy. These indicators alone might not be enough to fully capture how well the model truly understands its answers.\", \"Roles like \\u201cjudge\\u201d often require objectivity and caution, which might make the model more conservative in its responses. This cautious approach could limit the model\\u2019s effectiveness, especially in tasks that require flexible reasoning or hypothesis testing.\"], \"it_may_be_helpful_to_reference_the_following_papers_and_incorporate_a_discussion\": \"- Xie, Jian, et al. \\\"Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts.\\\" The Twelfth International Conference on Learning Representations.\\n\\nIt looks like that paper (Jian, et al.) also digs into situations where LLMs can be misled. Could the authors add some extra insights by comparing those findings with their own experiments here? \\nThe experimental results in this paper feel a bit limited when it comes to offering new perspectives.\\n\\n- Chan, Chi-Min, et al. 
\\\"ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate.\\\" The Twelfth International Conference on Learning Representations.\", \"questions\": [\"Are the randomly generated answers diverse and realistic enough to really test the model\\u2019s ability to handle complex misleading situations?\", \"Can the misleading info in the experiment truly reflect the subtle or hidden misdirections found in real-world scenarios to fully test the model's response?\", \"By relying just on confidence levels and accuracy to measure the model's self-awareness, are we capturing the full depth of its understanding?\", \"And could roles like a \\\"judge\\\" make the model more conservative, possibly affecting its performance on tasks that need flexible reasoning or hypothesis testing?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper proposes RoSe, a strategy that uses role guidance and self-reflection in prompts to evaluate whether LLMs know what it knows. They use a double calibrated strategy to find well-calibrated data to be used for fine-tuning LLMs. They study four research questions and found some interesting observations. For example, LLMs are highly sensitive to strong reminder information in prompts, such as \\\"the answer is\\\". In addition, role guidance can reduce the issue of overconfidence of LLMs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The idea of using roles like \\\"teacher\\\", \\\"student\\\" and \\\"classmate\\\" is interesting.\", \"The authors provide a lot of details for reproducing the experiments, such as prompts for each step and experiment results under different settings.\", \"The findings of the paper are quite interesting but not surprising. For example, LLMs may be confused by wrong guidance, tend to capture information from shortcuts, and their overconfidence can be mitigated by role guidance.\"], \"weaknesses\": [\"The writing of the paper is a bit unclear. The paper did not mention explicitly in the main method section about what are \\\"role-guided\\\", \\\"self-reflection\\\", and they only use a figure in the introduction to show what the prompt looks like.\", \"The author did not explain what is \\\"conf\\\" in Table 2 and Table 3. This is not a typical metric and the authors should explain why it is important.\"], \"questions\": [\"Could you explicitly explain the \\\"Role\\\", \\\"Rem\\\", \\\"Cue\\\", \\\"conf\\\", \\\"com\\\" appearing in the experiment result table?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Thanks for your explanation. It has addressed my concerns.\"}",
"{\"title\": \"Response to Reviewer fznJ (4/4)\", \"comment\": \"**C6.** Please include calibration plots and ECE to justify your statement.\\n\\n**Answer**: Thanks for your constructive suggestion! The calibration plots of GPT-4 turbo and ECE scores of LLaMA3-8B and Spark-13B are supplemented in **Appendix B.8** (highlighted in green). \\n\\nConsidering your constructive suggestion, to better compare the calibration abilities of LLMs in the evaluation and after fine-tuning, we employ calibration plots and ECE scores to perform evaluations on GPT-4 turbo and the fine-tuned LLaMA3-8B and Spark-13B. \\nFirst, the calibration plots of GPT-4 are shown in Figure 9. Overall, **the confidence levels of the LLM are high**, typically above 60%, with most values falling to the lower right side of the perfectly calibrated line. As shown in the top row, under various RoSe strategies, GPT-4 demonstrates good calibration performance at step-1, but its performance declines at step-3 under role guidance. Meanwhile, the calibration performance of the LLM at step-3 is worse under random answer guidance, which is consistent with the findings of RQ1 in Section 5.4.1.\\n\\nFurthermore, as shown in the bottom row of Figure 9, compared with the model calibration performance under the RoSe strategy, **the LLM exhibits poorer calibration without role guidance, which aligns with the findings in RQ3 of Section 5.4.1.**\\nLLMs tend to capture shortcuts by relying solely on the strong reminder in prompts to quickly find the answer; role guidance can reduce the over-reliance of LLMs on reminders to a certain extent.\\n\\n\\nThen, we employ the Expected Calibration Error (ECE), which compares confidence scores with the actual accuracy of predictions, for evaluation. As shown in Figures 10 and 11, **the fine-tuned LLaMA3-8B and Spark-13B generally have better calibration abilities than the base LLMs**. 
The calibration abilities of LLMs at step-2 are worse than at the other two steps, indicating that LLMs modify their correct answers at step-1 and raise their confidence scores during the self-reflection process, leading to higher ECE scores. This also demonstrates the significance of evaluating whether LLMs know what they know in Section 5.4.1, where LLMs fail to adhere to their correct answer, exhibiting uncertainty accompanied by rising self-confidence. \\n\\nMoreover, under the guidance of classmate and random cue information, the fine-tuned LLMs exhibit poor calibration ability compared with base LLMs at step-1 and step-2; there appears to be a discrepancy between the models' high confidence levels and their actual accuracy. After two steps of reflection and adjustment, there is a notable enhancement in the models' performance at step-3, which suggests the alignment of increased accuracy and confidence levels.\\n\\n\\n---\\n\\n**C7.** Thanks for the explanation! It would be better if you could put it into the article as the original statement is somewhat confusing.\\n\\n**Answer**: Thanks for your insightful question! We have supplemented the explanation in **Appendix A.2** (highlighted in green).\\n\\n---\\n\\nThank reviewer fznJ again for engaging with our work thoughtfully and constructively, and we are grateful for any additional feedback and suggestions.\"}
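For context on the ECE evaluation discussed above, the standard binned computation can be sketched as follows (a common formulation, not the authors' exact script; verbalized confidences are assumed normalized to [0, 1]):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: partition predictions into equal-width confidence bins,
    then take the sample-weighted mean of |bin accuracy - bin confidence|.
    `confidences` are floats in [0, 1]; `correct` are 0/1 labels."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        # conf == 1.0 falls into the last bin rather than overflowing
        bins[min(int(c * n_bins), n_bins - 1)].append((c, y))

    n = len(confidences)
    ece = 0.0
    for samples in bins:
        if not samples:
            continue
        avg_conf = sum(c for c, _ in samples) / len(samples)
        accuracy = sum(y for _, y in samples) / len(samples)
        ece += len(samples) / n * abs(accuracy - avg_conf)
    return ece
```

A perfectly calibrated model (e.g., 95% confident and correct 95% of the time within each bin) scores 0; an overconfident model, like the step-2 behavior described above, scores higher because stated confidence outruns accuracy.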
"{\"comment\": \"We greatly appreciate your positive comments and acknowledgment of our paper! We have carefully incorporated these clarifications and further improve the quality of our paper.\"}",
"{\"comment\": \"Thanks for the author's response. However, I still have some concerns:\\n\\nWhether the random answer generation can fully simulate misleading situations in the real world is still worth discussing.\\n\\nIn real-world applications, LLMs might face guidance from more diverse roles. It is still necessary to test and discuss whether such role diversity would affect the model's response to misleading information.\\n\\nAdditionally, misleading information in the real world might also involve multi-layered factors such as context and implication.\"}",
"{\"metareview\": \"This paper evaluates whether LLMs know what they know via a novel role guidance combined with self-reflection. The role guidance involves prompting LLMs using a specific role, such as \\\"Teacher\\\", \\\"Student\\\", \\\"Lawyer\\\", \\\"Judge\\\", and providing their answers, and the self-reflection mechanism involves multi-step reflection, each step of which builds upon its preceding reasoning process. Through empirical observations on both answer accuracies and verbal confidences, interesting findings are made, such as \\\"LLMs are very sensitive to the strong reminder information\\\". In addition, the authors propose a double-calibrated strategy to further fine-tune open-sourced LLMs with highly calibrated data selected from the role-guided prompting. The results demonstrate the advantage of the additional fine-tuning.\", \"strengths\": [\"The idea of exploring LLMs' sensitivity and behaviors towards role guidance and self-reflection with verbal confidence scores is interesting and concise.\", \"Comprehensive experiments have been conducted with detailed analysis made regarding LLMs' behavior, which could potentially be useful in designing better LLMs and inference strategies.\", \"A novel double-calibration strategy is proposed to enhance LLMs' performances according to the initial findings.\"], \"weaknesses\": [\"The conclusion that LLMs are sensitive to roles and random answers is not very surprising, given that several existing studies have made similar observations.\", \"The writing of this paper lacks clarity, especially when it comes to fine-tuning with the double-calibration strategy. 
A more detailed procedure on data collection and criteria for data selection should be incorporated.\", \"The analysis lacks diversity such as the types of misleading answers, different roles, prompts, etc.\"], \"additional_comments_on_reviewer_discussion\": [\"Almost all reviewers raised concerns about the clarity of the writing, making it hard to understand some of the procedures. Despite the authors' effort in providing further explanations, it seems the entire process is still not easy to understand and reproduce. I suggest the authors add more descriptions to demonstrate the entire process of fine-tuning and double-calibration.\", \"The reviewers raised questions regarding the diversity of the variations, such as misleading information, different roles, consistency measures. The authors provided additional experiments with roles including \\\"Judge\\\" and \\\"Lawyer\\\", changed answer indexes to textual answers, and incorporated consistency as a measurement to strengthen their claims.\", \"Overall, the additional experiments and analysis provided by the authors are beneficial in enhancing the paper's contribution.\"]}
"{\"title\": \"Thanks for your response\", \"comment\": [\"I would like to thank the authors for their responses. After carefully reading through your responses, the following concerns linger.\", \"C1. Thanks for your response, but still, the issue exists.\", \"C2. I appreciate your revision of the narration. It would be better if you could better illustrate the entire pipeline of the article, including the fine-tuning part.\", \"C3. I don't think the reason is convincing enough for not choosing more widely-used datasets such as BBH or MATH. According to your shared dataset, it seems that all questions are in English. I wonder why you claim the dataset to be bilingual. In addition, I'm not sure whether you have noticed that a lot of Unicode white space characters are used in your dataset (marked as red in the anonymous GitHub). Would it cause some trouble (not errors, but incorrect word separations) in tokenization?\", \"C4. I'm looking forward to seeing your code. It would be better if you could adapt your code to PyTorch and CUDA (I'm not sure whether PyTorch supports Ascend NPUs) for easier evaluation or adaptation of your method.\", \"C5. Then why are p and q removed from the conditional terms?\", \"C6. Please include calibration plots and ECE to justify your statement.\", \"C7. Thanks for the explanation! It would be better if you could put it into the article as the original statement is somewhat confusing.\", \"C8. Thanks for your explanation.\"]}
"{\"comment\": \"We greatly appreciate your positive comments and acknowledgment of our paper! We will carefully incorporate these clarifications and further improve the quality of our paper.\"}",
"{\"title\": \"Response to Reviewer mKeV\", \"comment\": \"We sincerely appreciate your valuable time and feedback on our paper. Thanks for the questions and suggestions! We are committed to thoroughly addressing your concerns.\\n\\n**C1.** Whether the random answer generation can fully simulate misleading situations in the real world is still worth discussing. \\n&\\n**C3.** Additionally, misleading information in the real world might also involve multi-layered factors such as context and implication.\\n\\n**Answer**: Thanks for your questions! We fully agree that misleading information is an important topic, and there is a specialized research field dedicated to this problem [1]. However, we would like to clarify that the focus of the paper is not misleading LLMs, nor does the phrase \\\"misleading information\\\" appear throughout the paper. **In this paper, our focus is on evaluating whether the LLM really knows, beyond merely assessing its susceptibility to misleading information.**\\n\\n**[1]** Chen C, Shu K. Combating misinformation in the age of llms: Opportunities and challenges. AI Magazine, 2024, 45(3): 354-368.\\n\\nIn this paper, inspired by **Educational Psychology**, we treat the LLM as a student to promote more research on evaluating and comparing LLMs with human behavior. Interesting findings on similarities and differences between LLMs and human behavior can be obtained through extensive evaluation experiments in a **multiple-choice QA test suite**:\\n\\n- Similar to human behavior, LLMs are easily affected by role guidance when they don't really know. In addition, LLMs tend to trust the role of authority more when guided by different roles.\\n- Unlike human behavior, LLMs exhibit over-reliance on strong reminder information due to gradient training and the training/SFT data distribution.\\n\\nWe are glad you appreciate that the paper aligns with real-world reasoning patterns, supporting more robust, real-world applications (Reviewer mKeV). 
Moreover, other reviewers appreciate the convincing motivation (Reviewer nbt9), the refreshing and interesting findings (Reviewers nbt9, JdDF, fznJ), and the benefit of a wide spectrum of the NLP community (Reviewers nbt9, fznJ).\\n\\n---\\n\\n**C2.** In real-world applications, LLMs might face guidance from more diverse roles. It is still necessary to test and discuss whether such role diversity would affect the model's response to misleading information.\\n\\n\\n**Answer:** Thanks for your suggestions! Indeed, to gain deeper insights, we expanded our evaluation to include role guidance from additional roles such as **Judge** and **Lawyer**. We incorporated nine different settings using the open-source legal multiple-choice QA dataset (JEC-QA) to thoroughly evaluate the influence of these roles. The experimental results can be found in Table 3, with detailed findings discussed in Section 5.4.1.\\n\\n---\\n\\nThank reviewer mKeV again for engaging with our work thoughtfully and constructively, and we are grateful for any additional feedback and suggestions.\"}",
"{\"comment\": \"Thanks for your explanation. I've raised my score to positive.\"}",
"{\"comment\": \"We greatly appreciate your positive comments and acknowledgment of our paper! We will carefully incorporate these clarifications and further improve the quality of our paper.\"}",
"{\"comment\": \"Dear reviewer JdDF,\\n\\nAs the open discussion period draws to a close in a few days, we want to check back to see whether you have any remaining concerns. Thank reviewer JdDF again for engaging with our work thoughtfully and constructively. We have provided global responses for all reviewers to highlight several supplements to the paper. In addition, we also believe that we have sufficiently responded to your earlier queries on various aspects of this work, and we provide a short summary here for your convenience:\\n\\n\\n1. The detailed introduction of role-guided and self-reflection strategy.\\n2. The detailed introduction of metrics on verbalized confidence.\\n3. The explanations for \\\"conf\\\", \\\"acc\\\", \\\"role\\\", \\\"rem\\\", and \\\"cue\\\" have been supplemented on the captions of Tables.\\n\\nPlease let us know if/how we can address any remaining concerns, and we are grateful for any additional feedback and suggestions.\\n\\nBest,\\n\\nAuthors\"}",
"{\"title\": \"Response to Reviewer mKeV (3/3)\", \"comment\": \"**Comment5**: It may be helpful to reference the following papers and incorporate a discussion.\\n\\n> [1] Xie, Jian, et al. \\\"Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts.\\\" ICLR2024.\\n\\n> [2] Chan, Chi-Min, et al. \\\"ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate.\\\" ICLR2024.\\n\\n\\n**Answer**: Thanks for your critical suggestion! Actually, [2] mainly employs multi-agent debate for text evaluation tasks. Existing text evaluation tasks mainly rely on manual annotation; [2] instead employs multiple LLMs to evaluate the quality of text rather than evaluating the ability of LLMs. The direction of our research is different. As for paper [1], we will discuss it in Section 2 in the updated version. Specifically, we differ in four aspects: motivation, task, evaluation method and evaluation object, as follows:\\n\\n\\n\\n- Motivation: [1] mainly studies the conflict between external knowledge and parameterized memory of LLMs in RAG scenarios. However, we mainly evaluate whether LLMs really know what they know and what they do not know, to better ensure trustworthiness in real-world scenarios.\\n\\n- Task: [1] makes evaluations on entity substitution QA (POPQA) and reasoning for providing True or False answers (STRATEGYQA).
We mainly employ EG-QA and JEC-QA, which both focus on multiple-choice questions in educational scenarios, aligning with the motivation of the paper.\\n\\n- Method: [1] elicits the parametric memory of LLMs and generates coherent counterfactual constructs by substituting entities, further evaluating the model's acceptance of parametric and counterfactual knowledge.\\n\\n Our approach primarily introduces Role-guided and Self-reflection (RoSe) strategy in prompts to assess the model's ability to self-improve within an individual feedback and its \\\"know what they know\\\" capability. Furthermore, we employ a double-calibrated strategy to extract high-quality reasoning processes, which helps fine-tune open-source LLMs, enhancing their reasoning abilities and reducing their focus on strong reminder information.\\n\\n- Object: [1] focuses on parametric memory and filters out the inconsistent data of model answers through entailment checking and answer consistency, which is called unqualified examples. The data [1] chooses not to learn is the data that we focus on \\\"not really know\\\", that is, the model answer is changed from correct to incorrect after reflection or confused by role-guided information.\\n\\n\\nWe will improve our work based on all the constructive comments.\"}",
"{\"title\": \"Response to Reviewer nbt9 (2/2)\", \"comment\": \"**Comment4&6**: Section 4.2 could use some improvement. How to conduct the so-called \\\"double-calibration\\\"?\\n\\n**Answer**: Thanks for your constructive suggestion! We are sorry for the confusion on \\\"double calibration\\\". Double calibration means that the model accuracy and confidence score are guaranteed simultaneously, which ensures not only the high quality but also the consistency of the reasoning process, as illustrated in line 234-240 (in revision line 230-234), i.e., the LLM is confident in its response and was not affected by the role guidance. Therefore, we propose a double-calibrated strategy to obtain advanced reasoning data from closed-source LLMs and fine-tune open-source LLMs to improve their reasoning capabilities. By leveraging the strong reasoning power of closed-source LLMs, we can extract well-calibrated data without human annotations.\\n\\n---\\n\\n**Comment8**: What is step-3 in the experiment section?\\n\\n**Answer**: Thanks for the question! In this paper, we introduce the RoSe strategy to integrate role guidance and self-reflection, which contains three steps. In step-1, we prompt LLMs to output answers. In step-2, we prompt LLMs to self-reflect on the answer of the previous step and further answer the question. In step-3, we employ different role guidance to evaluate the performance of LLMs. The detailed prompt settings of step-3 can be found in Table 7, such as:\\n\\n>My teacher thinks the answer is \\\\{Truth\\\\}. Please read the questions and options carefully, continue to think, reflect on the answer of step 2, and give the most appropriate answer and confidence.\\n\\nWe will improve our work based on all the constructive comments.\"}",
"{\"title\": \"Response to Reviewer fznJ (1/4)\", \"comment\": \"We sincerely appreciate your valuable time and your thoughtful feedback and suggestions on our paper! We are committed to thoroughly addressing your concerns.\\n\\n**C1.** Thanks for your response, but still, the issue exists.\\n\\n**Answer**: Regarding the comment *\\\"Although this article attends to more specific aspects of when and how LLM could fail, the demonstrated results are intuitive and may not deserve the discussion using a 10-page conference paper\\\"*, we are sorry that our earlier response did not fully resolve this concern. We would like to explain the main contributions of our paper, mainly from the significance and structure of the paper combined with the reviewers' evaluations, to show that the paper merits a 10-page research treatment, and indeed we have much more than 10 pages of discussion:\\n\\n1. Significance of the paper: \\n- **The refreshing and interesting findings (Reviewers nbt9, JdDF, fznJ)**. We first propose the RoSe strategy to explore the ability of LLMs to \\\"know what they know\\\", and reveal the local information (strong reminder) that the model relies on most. The RoSe strategy can mitigate the learning of shortcut reminders. Meanwhile, we discover the potential trust of LLMs in the authority role.\\n\\n- **Benefit a wide spectrum of the NLP community (Reviewers nbt9, fznJ)**. As the reviewers agree, the findings in the paper can promote more discoveries about LLMs in the NLP community. Our proposed double-calibrated strategy can effectively fine-tune open-source LLMs. Without dedicated human annotation, it can effectively auto-obtain high-quality CoT processes by combining the RoSe strategy with confidence calibration, which effectively helps improve the logical reasoning and calibration abilities of open-source LLMs. The effectiveness of the strategy is fully verified on various open-source LLMs and datasets.
Moreover, we construct the EG-QA dataset, which will be helpful to the related community that needs English multiple-choice QA suite.\\n\\n- **Better align with real-world reasoning patterns, supporting more robust, real-world applications (Reviewer mKeV) & Convincing motivation (Reviewer nbt9).** Inspired by research in Educational Psychology that students who don't really know are easily affected by teacher and peer guidance, we treat LLM as a student to promote more research on evaluating and comparing LLMs with human behavior. We find similarities and differences between LLMs and human behavior, and these findings can further help improve the performance of LLMs.\\n\\n\\n2. Structure of paper: the structure of this paper is mainly divided into two aspects, evaluation of closed-source LLMs and fine-tuning of open-source LLMs.\\n\\n- **In the evaluation**, we propose the RoSe strategy from the perspective of educational psychology, which helps us explore several behaviors of LLMs. The findings on similarities and differences in LLMs compared to human behavior can be obtained by extensive experiments:\\n\\n (1) Similar to human behavior, LLMs are easily affected by role guidance when they don't really know. In addition, LLMs tend to trust the role of authority more when guided by different roles.\\n\\n (2) Unlike human behavior, LLMs exhibit over-reliance on strong reminder information due to the gradient training and training/SFT data distribution.\\n\\n- **In the fine-tuning**, we propose double-calibrated strategy to extract well-calibrated data and help fine-tune open-source LLMs: \\n (1) The first calibration is to obtain data that remains accurate during the self-reflection and the role-guidance process, which is unaffected by role-guidance and has a progressive and high-quality reasoning process through self-reflection. \\n\\n (2) The second calibration is to obtain data that LLM \\\"really knows\\\" combined with confidence calibration. 
The model expresses confidence in its own answer in the reasoning process, without showing uncertainty affected by role guidance.\\n \\n Double-calibrated strategy helps automatically obtain high-quality reasoning process, accurate answers, consistent confidence scores, and effectively helps optimize model parameters in the fine-tuning process to improve open-source LLMs' reasoning ability and maintain self-reflection ability.\\n\\n---\"}",
"{\"comment\": \"Thank you for your response and clarification. With the updated information, I have revised my score.\"}",
"{\"summary\": \"This paper proposed RoSe, which is a set of strategies for assessing whether LLMs truly know the world knowledge, and how their confidence in their prediction could be affected when their answers are challenged by different roles. The authors also propose a double-calibrated strategy to fine-tune open-source LLMs so that they are more robust to local misleading information.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The studied question may be of great importance for the community to know the essence of LLMs and to develop better models. It is interesting to find how different types of guidance/challenges could affect the LLM results. The authors have made some effort to support their claim with experimental evidence.\", \"weaknesses\": [\"Many existing research articles studied the question of \\\"whether LLMs truly know what they know\\\". Although this article attends to more specific aspects of when and how LLM could fail, the demonstrated results are intuitive and may not deserve the discussion using a 10-page conference paper.\", \"The narration and illustration could use some improvement. For example, Figure 2 is not so informative in presenting the RoSe strategy or how the dataset for calibrated fine-tuning is constructed. There are significant redundancies within the first and second paragraphs in Section 4.2. The concept of \\\"well-calibrated data\\\" is not well-introduced and should be discussed in detail as it plays a key role in the fine-tuning process, etc.\", \"Some choices are not fully explained. For example, why the authors choose to do the main evaluation on the self-developed EG-QA dataset rather than other open-source datasets such as BBH, which also provides CoT chains in their answers.\", \"The reproducibility might be an issue. 
The proposed dataset EG-QA is not shared, the GPT versions are not specified, the fine-tuning objective is not sufficiently elaborated, etc.\"], \"questions\": [\"Edit 11/21: fix typos in the original comments.\", \"The narration in Lines 180--182 is confusing. What does it mean by we can obtain *, satisfying * based on logical consistency? Why the terms p and q are removed from $p(a,c|r)$? Does it mean the answer and confidence are generated only based on the reasoning chain, without seeing the original prompts?\", \"The results tables (2,3,4,5) show poor model calibration.\", \"How was the verbal confidence level such as \\\"very confident\\\" converted to scores?\", \"This paper is based on the assumption \\\"students who don\\u2019t really know are easily affected by the teacher and peer guidance\\\". Is there any evidence proving that this also holds for LLMs? This paper shows that LLMs are affected to different degrees by different types of guidance, but it does not directly build the link between \\\"not really know\\\" and \\\"easily affected\\\".\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": [\"**We sincerely appreciate the time and effort all reviewers made in evaluating our work!** We are also delighted that reviewers recognize the significance of our research question and the value of our findings:\", \"The refreshing and interesting findings. (Reviewers nbt9, JdDF, fznJ)\", \"Benefit a wide spectrum of the NLP community. (Reviewers nbt9, fznJ)\", \"Better align with real-world reasoning patterns, supporting more robust, real-world applications. (Reviewer mKeV)\", \"Convincing motivation. (Reviewer nbt9)\", \"Based on the reviewers' constructive suggestions, we have already made several changes to the paper (highlighted in green) and uploaded a new revision to the main text:\", \"The detailed explanation in the caption of Tables. (Reviewer JdDF)\", \"Correction in the double-calibrated strategy of Section 4.2. (Reviewers nbt9, fznJ)\", \"Calibration analysis of calibration plots and ECE scores in Appendix B.8. (Reviewer fznJ)\", \"Considering Reviewer mKeV's suggestions, we evaluate the internal consistency of the reasoning process, and LLMs under the RoSe strategy by subtle cue information, in Appendices B.6 and B.7 respectively.\", \"**We will continue to incorporate reviewers' feedback and improve the paper throughout the discussion period, and we look forward to further discussions!**\"]}",
"{\"title\": \"Response to Reviewer fznJ (1/4)\", \"comment\": \"We sincerely appreciate your valuable time and feedback on our paper, and are pleased to know that you recognize the great importance of our research question to the community and our interesting findings! We are committed to thoroughly addressing your concerns.\\n\\n---\\n\\n**Comment1**: Many existing research articles studied the question of \\\"whether LLMs truly know what they know\\\". Although this article attends to more specific aspects of when and how LLM could fail, the demonstrated results are intuitive and may not deserve the discussion using a 10-page conference paper.\\n\\n**Answer**: Thanks for the thoughtful question! Although existing works have proposed evaluations from different aspects, we approach the evaluation in a more fine-grained, specific and novel way, and the findings and conclusions we draw require experimental results to substantiate. Meanwhile, we propose a double-calibrated strategy to automatically obtain high-quality reasoning data and fine-tune open-source LLMs to improve their reasoning capabilities.\\n\\nWe reiterate our findings and work on evaluation and fine-tuning here. In the evaluations on closed-source LLMs, which can be found in Section 5.4.1: \\n\\n> 1. LLMs tend to capture shortcuts by relying solely on the strong reminder \\u201canswer is\\u201d in prompts to quickly find the answer rather than understanding genuine relationships between prompt and truth during training;\\n\\n> 2. Under the guidance of error information, LLMs fail to adhere to their own correct answer, exhibiting uncertainty on themselves;\\n\\n> 3. LLMs tend to trust the role of authority more;\\n\\n> 4. The overall confidence level of LLMs in settings with random cues is lower than that in settings with truth cues.
& The overall confidence of the LLMs at step-3 decreases under the guidance of different roles compared to the no-role guidance, which is similar to student performance and reflects their uncertainty.\\n\\nIn the fine-tuning of open-source LLMs, which can be found in Section 5.4.2: through the conclusions obtained in the evaluation stage, LLMs rely less on strong reminder information under role guidance (line 377-411), while LLMs are able to enhance their reasoning process during self-reflection. We regard the data that reason correctly and gain confidence in the reasoning process as well-calibrated data, which can help enhance the reasoning ability of the open-source LLMs and alleviate their shortcut learning by finding answers through prompts. Extensive experiments on the ID, OOD sets of EG-QA and openBookQA datasets demonstrate the effectiveness of the strategy.\\n\\n---\\n\\n**Comment2**: The narration and illustration could use some improvement. For example, Figure 2 is not so informative in presenting the RoSe strategy or how the dataset for calibrated fine-tuning is constructed. There are significant redundancies within the first and second paragraphs in Section 4.2. The concept of \\\"well-calibrated data\\\" is not well-introduced and should be discussed in detail as it plays a key role in the fine-tuning process, etc.\\n\\n**Answer**: Thanks for the detailed suggestion! We have modified Section 4.2 in the updated version, and we mainly illustrate the RoSe strategy through the case in Figure 1, which contains the role guidance and the three-step reflection strategy using prompt settings. We also respond specifically to the reviewer's questions here and explain them in the updated version.\\n\\n- In lines 194-197, the **RoSe strategy** is implemented through three main steps: In the first step, the LLM generates an initial response. In the second step, the model engages in reflective analysis of its previous answer.
Finally, the model receives guidance from relevant roles, which offers a reference answer, enabling the model to further reflect on its prior response while integrating role guidance.\\n\\n- The main goal of **calibrated fine-tuning** is to obtain advanced reasoning data from closed-source LLM and fine-tune open-source LLMs to improve their reasoning capabilities. By leveraging the strong reasoning power of closed-source LLM, we extract well-calibrated data without human annotations. \\n\\n- In line 234-240 (in revision line 230-234), **well-calibrated data** refers to data that ensure both the accuracy and authenticity of the reasoning process (LLM truly knows how to solve it). It maintains the model's confidence throughout, preventing a significant drop in confidence levels at the final step compared to previous ones. This mitigates the effect of reminder in role guidance that could undermine the model\\u2019s certainty (in line 423-426/revision line 428-430). Such well-calibrated data ensures not only the high quality but also the consistency of the reasoning process.\\n\\n---\"}"
]
} |
E2c7UsrZnN | Spectral Operator Methods for Learning Coherent Temporal Representations in Cellular Signaling Dynamics | [
"Heman Shakeri",
"Ali Tavasoli",
"Behnaz Moradijamei"
] | We present a novel operator-based framework for learning coherent temporal representations of cellular dynamics from live-cell imaging data. Recognizing the inherent stochasticity and measurement limitations in biological systems, our approach shifts the focus from predicting exact trajectories to characterizing key dynamical properties that shape cellular behaviors at the population level. By leveraging spectral analysis of the Koopman operator and smoothing via Markov semigroups of kernel integral operators, we identify near-resonant patterns and transient coherent structures that persist across different experimental conditions. This methodology effectively captures fundamental dynamics, providing insights into mechanisms of heterogeneous cell responses without the need to model precise transformation laws. We demonstrate the efficacy of our framework on a dataset of retinal pigment epithelial cells with an inducible oncogene, revealing conserved dynamical patterns across varying levels of ERK inhibition. Our work offers interpretable learned representations, even with limited and noisy single-cell-resolved recordings, advancing machine learning for dynamical systems and opening new avenues for understanding and predicting cellular behavior in response to external stimuli. | [
"Operator theory",
"temporal representations",
"delay-coordinate embeddings",
"Markov operator",
"single-cell analysis",
"machine learning for dynamical systems"
] | Reject | https://openreview.net/pdf?id=E2c7UsrZnN | https://openreview.net/forum?id=E2c7UsrZnN | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"x5YyNQVqDD",
"uuhb8JxFoz",
"trulwag7oq",
"jPYr57OGCH",
"VB0016lHqk",
"PwAe5nunWN",
"8wz4YCMajE",
"7l23nmgsvZ"
],
"note_type": [
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_comment",
"meta_review"
],
"note_created": [
1729844344598,
1730738828276,
1732763675503,
1730668005343,
1732730991597,
1737524222745,
1732763903816,
1733626264881
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission12904/Reviewer_ruaa"
],
[
"ICLR.cc/2025/Conference/Submission12904/Reviewer_vYUm"
],
[
"ICLR.cc/2025/Conference/Submission12904/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12904/Reviewer_trL1"
],
[
"ICLR.cc/2025/Conference/Submission12904/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission12904/Authors"
],
[
"ICLR.cc/2025/Conference/Submission12904/Area_Chair_grVf"
]
],
"structured_content_str": [
"{\"summary\": \"The manuscript \\u2018SPECTRAL OPERATOR METHODS FOR LEARNING COHERENT TEMPORAL REPRESENTATIONS IN CELLULAR SIGNALING DYNAMICS\\u2019 presents a method based on spectral analysis of the Koopman operator, for learning representations of cellular dynamics (with a focus on response to perturbations) based on live-cell imaging, and specifically, learn dynamical properties, or features, instead of exact trajectories. The claim is that avoiding learning specific trajectories, as is the common approach, and instead going towards more general dynamical properties of these trajectories, can allow learning interpretable, robust representations for noisy data such as single-cell resolved datasets. The authors demonstrate their method by identifying conserved dynamical patterns in a dataset of retinal pigment epithelial cells with an inducible oncogene. Specifically, the respective experiments capture the dynamics of ERK activity and proliferation given varying levels of ERK inhibition, and the authors claim that while learning exact trajectories is infeasible in this scenario, they are able to capture conserved temporal patterns across different inhibition levels.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea to focus on predictions of characteristics or features of dynamical trajectories and not the dynamical trajectories themselves, is well grounded and timely for the type of biological data that the authors focus on.\", \"The operator-based dynamics framework proposed in this work is mathematically sound and conceptually compatible with the greater task at hand.\", \"The live cell imaging datasets that the authors demonstrate their approach over provide an interesting and challenging setting.\"], \"weaknesses\": [\"Some of the statements regarding the analysis of the live cell imaging data seem to be not supported properly (see the \\u2018Questions\\u2019 section below).\", \"Some analyses need to be 
extended to the entire dataset, and not made only over a single sample (see \\u2018Questions\\u2019 below).\", \"There is no limitations section or discussion of disadvantages/limitations of the approaches.\", \"There are no comparisons to alternative existing approaches for the analysis of such data.\"], \"questions\": [\"Comments:\", \"In the introduction, there is the sentence \\u2018Given these challenges, directly learning the transformation laws governing cellular dynamics is often impractical\\u2019; it could be useful to cover a few of the recent efforts in that direction.\", \"\\u2018Moreover, smoothing via Markov semigroups of kernel integral operators allows us to capture the stochastic nature of cellular processes in a tractable way, aligning with the idea that noise plays a functional role in cellular decision-making.\\u2019 - the second part needs explanation; it\\u2019s not clear how it follows the first part.\", \"The top panels of Fig 1 need some labels.\", \"The analysis of the Koopman modes, presented in Fig. 2b, should be better explained in the text and the corresponding statements should be better validated. For example, can you support your statement that the shift in the first mode reflects inhibition of ERK activity? Or why it reflects \\u2018a probabilistic shift\\u2019? Or what is the significance of the oscillatory behavior of the second mode? Can you support your hypothesis that it is related to the cell cycle for example? (and if that\\u2019s the case, then how is this \\u2018intrinsic cyclical dynamics within the ERK signaling network\\u2019?)\", \"Figure 3 is missing a statistical analysis over all cell trajectories, not just a visualization of a single (\\u201ccherry-picked\\u201d?) cell.\", \"What is the rationale behind generalizing to unseen data that is under a different biological setting? Why do you expect good generalization in that case at all?
This needs to be better explained.\", \"Discussion of limitations of the approach should be added.\", \"It seems that the conclusions throughout the paper should be phrased more modestly given the limited extent of analysis on real biological data.\", \"\\u2018Compared to approaches using functional data analysis or deep learning architectures, our framework offers significant advantages. It provides interpretable representations through Koopman eigenfunctions, corresponding to meaningful temporal patterns in the data, unlike black-box models.\\u2019 - But what are the potential disadvantages?\"], \"minor\": [\"\\u2018aligning with the idea\\u2019 - typo?\", \"\\u2018coherent temporal patterns\\u2019 - not defined\", \"The resolution/quality of some of the figures/panels should be improved\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors describe a dataset of cellular activity, and a spectral method for estimating its properties. They then demonstrate the application of this spectral method on this dataset.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Using cool spectral (and other) estimators on dynamical systems is a timely and important topic, and more work on this topic is desirable, since many datasets have these properties, and the standard AI toolkit is far less well-developed on this. Thus, novel methods development are welcome.\", \"weaknesses\": \"The one biggest weakness that I see is that this method is not quantitatively compared to anything. Neither is it shown to \\\"work\\\" on simulated data, nor on benchmark data, nor in theory. Here, I am open to \\\"work\\\" being defined in many ways, including improved accuracy, timing, interpretability, or even elegance. But I see literally zero numerical or analytic comparisons to any other method. Thus, I have no idea whether this is the most valuable advance in modeling dynamics since Kalman, or relatively useless, because other things work just as well.\", \"questions\": [\"Within the first few sentences, there were things I did not know/understand. Why is \\\"low molecule copy number\\\" a problem? What is a \\\"Koopman Operator\\\". \\\"Regulon\\\"? \\\"Isogenic\\\"? Please introduce any technical concepts that are not textbook \\\"AI\\\", so the reader can follow more easily. This includes \\\" smoothing via Markov semigroups of kernel integral operators\\\", for example.\", \"line 60: \\\"low in dimensionality, posing significant challenges for analysis.\\\" Why does low-dimensionality pose a challenge?\", \"I don't really understand what is new. Are the methods new, or just a new application of standard stuff? 
4 pages are devoted to explaining them, and they are complicated.\", \"How is this stuff related to functional PCA, which is a fairly well established approach to modeling dynamics at scale. It seems highly related. I see no discussion of how other approaches might be able to solve this problem. I have a paper on something similar, https://www.sciencedirect.com/science/article/pii/S0167865516303671, which seems like maybe it would be applicable? If fPCA and RR-System Identification are not applicable to this problem, I'd want to understand why not. If so, I'd want to see benchmarks comparing this to something else.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer ruaa,\\n\\nThank you for your detailed and constructive feedback. \\n\\n---\\n\\n### **Literature on Direct Learning**\\n\\nIn the introduction, we now discuss recent efforts at direct learning of cellular dynamics, including mechanistic modeling approaches and hybrid methods combining data-driven and first-principles models. However, these approaches face significant challenges in fast scale cell perturbation processes due to the need for preserving high dimensionality and capturing the inherent stochasticity, which motivates our alternative approach.\\n\\n---\\n\\n### **Markov Smoothing and Cellular Noise**\\n\\nWe have clarified the connection between Markov smoothing and cellular noise.\", \"the_smoothed_operator_preserves_key_dynamical_features_while_enabling_practical_computation_through\": \"$$\\n\\\\mathcal{K}_\\\\tau = e^{\\\\tau(\\\\mathcal{L}-I)}\\n$$\\n\\nwhere $\\\\tau$ controls smoothing strength, and $\\\\mathcal{L}$ is the generator of the Markov semigroup. This regularization is crucial because it transforms the Koopman operator from having a continuous spectrum (which is challenging to approximate from finite data) to having a discrete spectrum, enabling reliable finite-rank approximation by making the smoothed operator compact.\\n\\nThis preserves the dynamically relevant eigenfunctions while filtering out unpredictable dynamics and noise and provides mathematical guarantees about the convergence of numerical approximations. Thus, the regularization essentially acts as a spectral filter that:\\n\\n$$\\n\\\\lambda \\\\mapsto e^{\\\\tau(\\\\lambda - 1)}\\n$$\\n\\nThis mapping compresses the essential spectrum while maintaining the point spectrum corresponding to coherent dynamical features. 
Consequently, we can reliably approximate the operator from finite data and extract meaningful dynamical patterns that persist across different experimental conditions.\\n\\n---\\n\\n### **Statistical Analysis**\", \"we_now_include_comprehensive_performance_metrics_across_all_cells\": [\"Root Mean Square Error (RMSE)\", \"Mean Absolute Error (MAE)\", \"Mean Absolute Percentage Error (MAPE)\", \"Coefficient of Determination (( R^2 ))\", \"Dynamic Time Warping (DTW)\", \"---\", \"### **Generalization Rationale**\", \"The biological basis for generalization stems from the conservation of core pathway architecture. Despite different experimental conditions, fundamental mechanisms of:\", \"Protein interaction networks\", \"Signaling cascade topology\", \"Regulatory feedback loops\", \"remain unchanged, enabling the prediction of shared dynamical features.\", \"---\", \"### **Advantages vs Disadvantages**\", \"While our method provides interpretable representations through Koopman eigenfunctions, it has trade-offs:\", \"#### **Advantages:**\", \"Theoretical guarantees\", \"Biological interpretability\", \"Robust generalization\", \"#### **Disadvantages:**\", \"Computational complexity of $O(N^2)$ (Although this is significantly reduced by the use of k nearest neighbor O(k_nnN))\", \"Memory requirements for kernel computations\", \"Sensitivity to sampling frequency (that is why we use variable bandwidth kernels)\", \"Please see the paragraph on \\\"Computational Considerations\\\"\", \"---\", \"### **Minor Points**\", \"Fixed typos, including \\\"aligning with.\\\" We rewrote this section.\", \"Defined \\\"coherent temporal patterns\\\" mathematically through eigenfunction persistence.\"], \"we_rewrote_this\": \"*\\\"...we focus on identifying coherent temporal patterns that persist for finite times\\u2014analogous to studying coherent structures in turbulent flows \\\\cite{Mezic2013Fluid}.\\\"*\\n\\n- Improved figure resolution using vector graphics. 
We vectorized the figures.\\n\\n---\\n\\nThese revisions are now reflected in the updated manuscript, now with enhanced clarity and rigor thanks to the reviewers' comments.\"}",
"{\"summary\": \"The paper introduces a spectral operator-based framework applicable for learning cellular dynamics from live-cell imaging. By characterizing key properties of the dynamics that shape cellular behaviors at the population level, the authors overcome challenges posed by this task. The approach uses the Koopman operator and Markov smoothing, providing biologically interpretable representations which can be used to identify properties of the system\\u2019s dynamics across varying external conditions. To demonstrate its biological relevance, the framework is applied to live-cell imaging data from retinal pigment epithelial (RPE) cells to study the dynamics in response to perturbations.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"*Originality*: This work presents a novel approach, using the established Koopman operator approach to extract interpretable representations of live-cell imaging data. Such representations can be valuable to uncover the underlying biological mechanisms captured in live-cell datasets.\", \"*Quality & clarity*: The paper provides a thorough theoretical background of the suggested framework and experimental data setting.\", \"*Significance*: As motivated in the text, understanding the dynamics of cellular behavior is a core and challenging task; recovering decision-making mechanisms in response to perturbations. In applying the approach to real-world data, the authors demonstrate its applicability and relevance for biological discovery.\"], \"weaknesses\": [\"While the presented framework seems promising for biological discovery, this submission showcases preliminary work and lacks crucial components:\", \"*Contextualization to prior work*: Alongside the challenges in analyzing live-cell data, covered in the introduction, it is valuable to include an elaboration on existing approaches. 
Given that this is missing, it is challenging to accurately assess the contribution of this work.\", \"*Implementation details*: While a thorough theoretical description is presented, an implementation or pseudocode is missing, which would be valuable for readers wishing to use the methods. Next, the authors briefly relate to the \\\"Computational considerations\\\", claiming that the approach can handle large datasets efficiently. This claim is very vague and it is hard to judge the practical applicability of the framework.\", \"*Experimental results*: The actual analysis presented is very limited. Biological interpretability boils down to the analysis of two Koopman modes, and the reconstruction/prediction performance is only assessed visually (at poor resolution). Moreover, following the contextualization to prior work, reconstruction/prediction performance is not compared to alternative approaches.\", \"Please refer to the Questions section for practical suggestions in light of the above comments.\"], \"questions\": \"Following the weaknesses above, could the authors relate to the following: \\\\\\n(1) include a \\\"related work section\\\". This section should discuss existing approaches for analyzing live-cell imaging data, and explicitly state how the presented method compares to or improves upon these approaches. This would help grasp the novelty and significance of the proposed framework; \\\\\\n(2) provide an implementation/pseudocode; \\\\\\n(3) present an efficiency analysis quantifying the statement \\\"This approach allows us to handle large datasets efficiently\\\" (Lines 341-342); \\\\\\n(4) extend the experimental analysis: \\\\\\n(4.a) provide additional biological insights (on the studied/additional data); namely, presenting a more extensive downstream analysis of the Koopman modes demonstrating their potential for dynamics understanding. \\\\\\n(4.b) assess the prediction performance qualitatively and compare it to existing baselines. 
\\\\\\n(4.c) *minor* improve the figures' quality.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"To vYUm.\", \"comment\": \"Dear Reviewer vYUm04,\\n\\nThank you for your thoughtful and constructive feedback; it is evident that you are rooted in the same area, and we appreciate this. We addressed your concerns in the following, within the word count limit: \\n1. Quantitative Method Comparison:\", \"we_have_conducted_extensive_comparisons_with_three_established_methods\": \"- CODEX (a neural network framework recently used for cellular dynamics analysis by Jacques et al. 2021)\\n- fPCA (functional Principal Component Analysis, used for live-cell data by Sampattavanich et al. 2018)\\n- PLDS (RR Probabilistic Linear Dynamical Systems, as implemented by Chen et al. 2017 in Mr. SID)\\n\\nOur method outperforms these approaches. We include the results in the paper. One key advantage of our approach over PLDS lies in handling nonlinear dynamics. While PLDS uses the linear model:\\n$$x_{n+1} = Ax_n$$ \\nwhere $A \\\\in \\\\mathbb{R}^{d \\\\times d}$ is constrained to be stable, our method employs Koopman eigenfunctions $\\\\phi_i$ that satisfy:\\n$$\\\\mathcal{K}\\\\phi_i = \\\\lambda_i \\\\phi_i$$ \\nThis allows us to capture complex nonlinear dynamics while maintaining mathematical tractability. \\n\\n2. Technical Terms Clarification: We have carefully defined and explained all technical concepts. Low molecule copy number: When the number of molecules $N$ is small, the standard deviation relative to the mean scales as $1/\\\\sqrt{N}$, making stochastic effects dominant (as opposed to the thermodynamic limit).\", \"koopman_operator\": \"We rewrote the entire Section 2 to be more accessible to a broader ML community. In short, for a dynamical system with state $x$, the Koopman operator $\\\\mathcal{K}$ acts on observables $g$ as: $$(\\\\mathcal{K}g)(x) = g(F(x))$$ where $F$ is the state transition function. 
This transforms nonlinear dynamics into linear evolution of observables.\", \"we_describe_regulon_in_the_intro_section\": \"A set of genes under common regulatory control.\\n\\n3. Method Novelty: Our key innovation is to offer an interpretable representation for fast-scale subcellular dynamics following perturbation by combining Koopman analysis with Markov smoothing, which is able to retain its value in completely new experiments. The smoothed operator preserves key dynamical features while enabling practical computation through: \\n$$\\\\mathcal{K}_\\\\tau = e^{\\\\tau(\\\\mathcal{L}-I)}$$\\nwhere $\\\\tau$ controls smoothing strength and $\\\\mathcal{L}$ is the generator of the Markov semigroup. This regularization is crucial because it transforms the Koopman operator from having a continuous spectrum (which is challenging to approximate from finite data) to having a discrete spectrum, and it enables reliable finite-rank approximation by making the smoothed operator compact. This preserves the dynamically relevant eigenfunctions while filtering out noise and provides mathematical guarantees about the convergence of numerical approximations. Thus, the regularization essentially acts as a spectral filter: $$\\\\lambda \\\\mapsto e^{\\\\tau(\\\\lambda-1)}$$ This mapping compresses the essential spectrum while maintaining the point spectrum corresponding to coherent dynamical features. Consequently, we can reliably approximate the operator from finite data and extract meaningful dynamical patterns that persist across different experimental conditions.\", \"this_approach_differs_fundamentally_from_fpca_which_decomposes_data_as\": \"$$x(t) = \\\\mu(t) + \\\\sum_{k=1}^K c_k \\\\phi_k(t)$$\", \"our_method_instead_captures_intrinsic_dynamical_modes_that_evolve_as\": \"$$\\\\phi(x_n) = \\\\lambda^n \\\\phi(x_0)$$. We illustrated this in the results section. 
Note that fPCA explains the observed variation in functional data and is unable to make predictions for times outside of the observed duration (thus we could not include it in the model comparisons).\\n\\nReduced-rank (RR) system identification methods like PLDS approximate dynamics through:\\n$$x_{n+1} = Ax_n$$\\nwhere $A \\\\in \\\\mathbb{R}^{d \\\\times d}$ is stable. This introduces fundamental limitations through two reductions:\\n- State space dimension reduction\\n- Nonlinear dynamics linearization\", \"these_limitations_create_spectral_pollution_through\": [\"Finite rank truncation\", \"Restricted linear model optimization\", \"Forced stability constraints\", \"In contrast, our Koopman approach lifts dynamics to a space where nonlinear evolution becomes linear and approximates true eigenfunctions while preserving intermittent coherent structures and multi-scale interactions that RR methods average out. Our comparative results using the Mr. SID implementation demonstrate superior long-term prediction accuracy and pattern preservation on new datasets.\", \"These changes are now reflected in our revised manuscript through (will be uploaded with all changes by the end of Nov 27):\", \"New comparison sections and tables\", \"Being more accessible to a broader ML community\", \"Improved figures illustrating key concepts\", \"Clearer technical explanations\"]}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Dear Reviewer trL1,\\n\\nThank you for your detailed and constructive feedback, which has helped strengthen our manuscript. We address each of your concerns:\\n\\n---\\n\\n### **1. Missing Context/Prior Work**\\n\\nWe have added a comprehensive literature review section that positions our work within existing approaches.\\n\\ni) We begin by discussing single-cell trajectory inference methods in developmental biology. Due to their focus on slow time scales, these methods are well-suited for use in reduced dimensions and simulation-based techniques.\\nii) Then we shift our focus to cellular dynamics modeling and explain why the heterogeneous response of cells leads to current methods remaining phenomenological. In other words, while these provide valuable insights, they often fail to capture the rich nonlinear dynamics present in cellular systems. We discuss CODEX, a recent development in live-cell data analysis that employs deep learning architectures. \\niii) Next, we discuss, perhaps the closest cousin to our method, reduced-rank (RR) system identification methods that attempt to approximate dynamics through stable linear systems. However, they often suffer from fundamental limitations due to forced dimensionality reduction and linearization, and, more importantly, spectral pollution and loss of important dynamical features due to the truncated models. Additionally, they cannot capture fast time scales due to their constraint on stable dynamics.\\n\\nThen we introduce our approach and how it bridges these gaps by combining spectral analysis with Markov smoothing while maintaining mathematical rigor through operator-theoretic methods. This provides both interpretability and theoretical guarantees while capturing nonlinear dynamics.\\n\\n---\\n\\n### **2. 
Implementation Details**\\n\\nWe provide complete implementation details through Algorithm 1, which includes:\", \"for_computing_koopman_eigenfunctions\": [\"Time series embedding using delay coordinates\", \"Kernel matrix construction with adaptive bandwidth\", \"Markov operator normalization\", \"Eigendecomposition and mode selection\"], \"our_implementation_handles_large_datasets_efficiently_through\": [\"Sparse matrix representations for kernel and Markov operators\", \"\\\\( k \\\\)-nearest neighbor graph construction to limit memory usage\", \"Efficient eigenvalue computation using iterative methods\", \"Adaptive bandwidth selection based on local data density\", \"---\", \"### **3. Extended Experimental Analysis**\", \"We have significantly expanded our experimental validation by comparing the performance of our method on held-out data and unseen datasets.\"], \"testing_on_completely_new_datasets_shows_generalization\": \"- Application to different ERK inhibitor concentrations\\n- Validation on independent experimental replicates\\n- Assessment of prediction accuracy beyond training time frames\\n\\nWe include statistical metrics for the prediction performance across all cells.\\n\\n---\\n\\n### **4. 
Method Comparisons**\", \"we_provide_comprehensive_comparisons_with_state_of_the_art_methods\": [\"**CODEX (deep learning):**\", \"Advanced neural network architecture using convolutional neural network (CNN) layers that identify motifs in the cell trajectories\", \"**PLDS (probabilistic linear dynamical systems):**\", \"Performance metrics across all methods (defined as):\", \"Root Mean Square Error (RMSE)\", \"Mean Absolute Error (MAE)\", \"Mean Absolute Percentage Error (MAPE)\", \"Coefficient of Determination (\\\\( R^2 \\\\))\", \"Dynamic Time Warping (DTW)\", \"These comprehensive changes are now reflected in our revised manuscript, providing a complete framework for understanding, implementing, and validating our method while demonstrating its advantages over existing approaches.\"]}",
"{\"metareview\": \"This work presents a spectral method, based on Koopman operator theory, to develop a novel algorithm that can identify shared dynamical properties in cell signaling dynamics. The reviewers all considered the problem being addressed as significant and the approach as interesting and novel. The primary concerns with the original submission rested in the lack of thorough situating of the work in the context of past methods, either conceptually or numerically. While the authors did attempt to address these concerns, the changes are extensive and significant enough to warrant more careful editing and completion of experimental tests for a future submission.\", \"additional_comments_on_reviewer_discussion\": \"Sadly, the reviewers did not engage post-rebuttal; however, I read through the author responses in making a final assessment.\"}"
]
} |
E2RyjrBMVZ | Quantifying Variance in Evaluation Benchmarks | [
"Lovish Madaan",
"Aaditya K Singh",
"Rylan Schaeffer",
"Andrew Poulton",
"Sanmi Koyejo",
"Pontus Stenetorp",
"Sharan Narang",
"Dieuwke Hupkes"
] | Evaluation benchmarks are the cornerstone of measuring capabilities of large language models (LLMs), as well as driving progress in said capabilities. Originally designed to make claims about capabilities (or lack thereof) in fully pretrained models, evaluation benchmarks are now also extensively used to decide between various training choices. Despite this widespread usage, we rarely quantify the variance in our evaluation benchmarks, which dictates whether differences in performance are meaningful. Here, we define and measure a range of metrics geared towards measuring variance in evaluation benchmarks, including seed variance across initialisations, and monotonicity during training. By studying a large number of models -- both openly available and pretrained from scratch -- we provide empirical estimates for a variety of variance metrics, with considerations and recommendations for practitioners. We also evaluate the utility and tradeoffs of continuous versus discrete performance measures and explore options for better understanding and reducing this variance. We find that simple changes, such as framing choice tasks (like MMLU) as completion tasks, can often reduce variance for smaller scale (∼7B) models, while more involved methods inspired from human testing literature (such as item analysis and item response theory) struggle to meaningfully reduce variance. Overall, our work provides insights into variance in evaluation benchmarks, suggests LM-specific techniques to reduce variance, and more generally encourages practitioners to carefully factor in variance when comparing models. | [
"Evaluations",
"Language Models",
"LLMs"
] | Reject | https://openreview.net/pdf?id=E2RyjrBMVZ | https://openreview.net/forum?id=E2RyjrBMVZ | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yrJk7lHfdG",
"uWOLqCWIrp",
"piadgKv8i4",
"n0gb7uyD0O",
"lsdKNfA8lY",
"l8Gxsyc7gD",
"kY0Wab6jrt",
"eInM3tWpYJ",
"e8C7AM1dGq",
"ZtfNcu90Ge",
"Ysof5sdJ68",
"XCH1Mw0vjE",
"VjHFypE16X",
"Vacn7oIcoH",
"Uywx5teyyE",
"IaQT1vFFau",
"C89i4E8Bo3",
"B3xG6C7KXv",
"AVerCSbpaU",
"4crRr936UJ"
],
"note_type": [
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"meta_review",
"official_comment",
"decision",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1732566536286,
1730652924516,
1732720855166,
1732562584697,
1733007581400,
1730713468264,
1732564308989,
1730406539978,
1732736038920,
1734620611514,
1732562558714,
1737523525307,
1730615448003,
1730815453808,
1732556645289,
1732503993254,
1732779883349,
1730603746305,
1732554041537,
1732554207092
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_V3wm"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_V3wm"
],
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_w3M8"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_i8dM"
],
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_w3M8"
],
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2710/Area_Chair_DT9w"
],
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_7Zuc"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_tZvi"
],
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
],
[
"~Clarence_Lee3"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_7Zuc"
],
[
"ICLR.cc/2025/Conference/Submission2710/Reviewer_VsPx"
],
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
],
[
"ICLR.cc/2025/Conference/Submission2710/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to Reviewer VsPx\", \"comment\": \"We thank the reviewer for their comments and address the questions/feedback below:\\n\\n**(Part 1) Seed Variance and Its Impact on Benchmark Scores**\\n> Does seed variance contribute overwhelmingly to the overall variance which makes it a critical factor to study, beyond [1,2,Arena]? I would like a better distinction in the work for why studying seed variance specifically is important, given the existing literature on other variance sources.\\n\\nWe agree with the reviewer that the focus in this paper is on seed variance. But this variance is quite an important factor for evaluations done during the early stages of pre-training. Examples include experiments and ablations performed at different compute (FLOPs) budgets to build scaling laws, selection of the best data mixture and weights for pre-training, and model architecture ablations. High variance during the early stages of pre-training and at smaller FLOPs budgets can lead to inaccurate selection of the architecture/datamix, leading to a waste of significant amounts of computational resources for larger pre-training runs.\\n\\n\\n**(Part 2) Efficient Benchmarking in High-Variance Contexts**\\n> I doubt those papers claim their methods are intended to reduce or understand variance. Could the authors provide a citation for this claim? If no direct claims about variance reduction exist in the cited works, could the authors discuss in the work why they believe these methods should be evaluated in terms of variance reduction.\\n\\nThe tinyBenchmarks paper (https://arxiv.org/pdf/2402.00838) motivates the use of IRT methods for pre-training ablations in Sections 1 and 5, which in hindsight may not be effective due to increased variance (Figure 5 shows this clearly). 
Through our analysis, we want to highlight that any kind of efficient benchmarking (tinyBenchmarks: https://arxiv.org/pdf/2402.00838, Mixeval: https://arxiv.org/abs/2406.06565, SMART: https://arxiv.org/abs/2410.20245) is limited in its use for pre-training ablations, building scaling laws, etc., especially during the early stages of pre-training, because these methods exhibit high variance and hence cannot reliably distinguish between the various experimental settings in a way that transfers to larger models as well.\\n\\n> While I agree with the authors that generally, subsampling might lead to critical issues in benchmarking \\u2013 the results presented seem underwhelming given I fully buy the claim of using continuous metrics to reduce variance. Specifically, the results in Table 7 indicate that Kendall\\u2019s tau still remains quite high and relatively stable. Do the authors believe this is damning evidence against efficient benchmarking methods?\\n\\nIn Table 7, the last two columns are the most relevant, where the percentage change in flips can be up to 30.77%. This shows that the efficient benchmarking methods built using weaker models are not transferable to more capable models, and can lead to inaccurate estimation of the actual final scores on the full evaluation test set.\"}",
"{\"summary\": \"This paper aims to quantify the amount of variance prevalent in popular LLM evaluation benchmarks, mainly by varying the random seeds for model training. The paper makes an important point about how we currently consider benchmark scores to only be point estimates rather than considering several other different factors of stochasticity while doing model comparisons. The paper conducts several experiments to quantify this seed variance across different benchmarks, and provides practical guidance on what metrics and evaluations to use during pretraining to provide most signal.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper presents a timely and important topic of variance in evaluation benchmarks, that should be widely considered while reporting benchmark performance.\", \"The paper is cogently written and presents convincing demonstrations of the importance of considering variance in evaluations while doing model comparisons.\", \"The paper showcases results and cautions against using sample efficient benchmarking methods while doing model pretraining, since they are likely to provide a higher-variance signal during training.\"], \"weaknesses\": [\"The provided variance numbers in Tab. 1, while being important as a reference, cannot be used directly for making comparisons across different model scales or training durations since it is not clear how those numbers would change with those factors, and whether we\\u2019d expect larger or smaller deviations in performance.\", \"Some important empirical details are missing, for example, could you provide more details on how you compute the SNRs for both discrete and continuous? 
This is important since you are making the claim that Cont SNRs being larger than Disc SNRs suggests that we should shift towards continuous metrics for making model comparison decisions, and this claim can only be validated if the precise method of computing these SNRs is justified.\", \"Another important question that is critical for the takeaways of the paper: Is the unnormalised monotonicity the best metric here to capture the \\u201cstableness\\u201d of a benchmark? Shouldn\\u2019t that monotonicity be weighted by the monotonicity you would expect by chance? So something like a Cohen\\u2019s kappa coefficient here seems more appropriate rather than just the direct unnormalised monotonicity. For example, see the analysis in Geirhos et al., Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency. I appreciate that this might be hard to formalise since it\\u2019s unclear what the \\u201cmonotonicity expected by chance\\u201d is, but I believe this is worth at least a discussion point, and would like to hear the authors\\u2019 thoughts on this. Also worth mentioning here that monotonicity was also explored for benchmark selection as good validation datasets for the FineWeb training recipe: The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale.\", \"Suggestion on formatting the paper: the results sections in 3 and 4 have some very important insights / takeaways for practitioners to adopt for pretraining and actual evaluation. I think it would majorly improve the paper if these could be highlighted at the end of each subsection in bold, or even better, in a small box signifying the key takeaway.\", \"There are other kinds of variance-inducing factors that haven't been investigated. For example, how does task type affect this variance?\", \"How does the number of shots and choice of shots affect variance? How does model size impact variance? Does it increase as we scale up the model? 
There should be a discussion added about the other variance inducing factors which haven't been considered in the current work, and it must be made clear in the paper that \\\"seed variance\\\" is the only type of variance being investigated in this work.\"], \"questions\": [\"I have quite a few questions that I think would improve the quality of the paper, and for my own clarifications:\", \"Why are the Disc Std in tab 2 and std in tab 1 different? Aren\\u2019t they computing the same metric over the same set of models? The means are exactly the same for the discrete metrics so I presume the stds should be too?\", \"In tab. 2 what decoding strategy do you use for the generative tasks? The difference between GSM8k (0.99) and HumanEval (0.21) seems quite high for a difference in log-likelihoods. Are these token-length normalised likelihoods or unnormalised likelihoods?\", \"Comment: Typo in fig 3 caption, should be item difficulty (y-axis) and item discriminability (x-axis).\", \"For the analysis in fig 3, how does the correlation between item discrimination b/w train and test look like for another randomly selected set of train and test models? The key question is what the variance in the correlation obtained for a random split of random train-test models would be? This would make the conclusion (that the low correlation between train and test item discriminability is due to the differences in model capability) more strong and robust.\", \"The \\u201cbest\\\" and \\u201cworst\\u201d models used for creating the train and test splits do not share the same training data mixtures right? So I would expect that the takeaways from fig 3, especially the ones involving the difficulty split are also confounded by different levels of data contamination with respect to the test sets?\", \"For the points in section 5, the main claims revolve around using the IRT benchmarks themselves. 
How much of that can be explained by the increased variance from just having a smaller number of test samples? i.e. how would the results look like if instead of the IRT test set, a random set of evaluation points of the same size as the IRT set were used for the analyses?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Comments on reviewer concerns and author rebuttal\", \"comment\": \"On the second point raised by reviewer tZvi and the authors' response: I disagree with the reviewer's assertion that the methods proposed in this paper, along with the general idea of quantifying variance in the manner studied in the paper, are not useful. I completely align with the authors' rebuttal, especially on the following point:\\n\\n> If there's significant variance, it's hard to discriminate between the various experimental settings and one might end up selecting a datamix or architecture which hurts performance over the course of a full pre-training run, leading to significant waste in computational resources.\\n\\nAs a practitioner myself, I have faced this same exact issue multiple times and completely agree with the authors that a lack of systematic knowledge about the variance of particular evaluations can cause significant wastage in terms of compute and resources due to getting flawed signals---by making design choices based on high-variance evaluations and pursuing/discarding different methods based on these flawed signals, several promising methods could be wrongly discarded simply due to looking at the wrong evaluation metrics. In fact, such approaches of only including low-variance evaluations have also been followed by recent foundation model training runs by [Huggingface](https://huggingfacefw-blogpost-fineweb-v1.static.hf.space/dist/index.html#ablations_and_evaluation_setup) and [DatologyAI](https://www.datologyai.com/post/productionized-multimodal-data-curation-at-the-billion-sample-scale). Hence, I believe quantifying the exact variance of different evaluations is highly important.\"}",
"{\"title\": \"Response to Reviewer w3M8 (continued)\", \"comment\": \"> Can authors show similar evidence for instruction tuned models? It seems to me that it is not expensive to instruction tune base models authors already have obtained, or at least use already available instruction tuned models. Alternatively, can authors present more benchs that do not heavily rely on instruction like templates?\\n\\n> Can authors present evidence that using continuous metrics they can do better on predicting either end model performance from early training phases or larger scale model performance from smaller scale pretraining?\\n\\nWe believe that we have answered both the questions raised by the reviewer above as part of addressing the weaknesses. We are happy to discuss any additional questions that the reviewer may have.\"}",
"{\"comment\": \"> Again, we would like to emphasize that we are performing evaluations using log-likelihood (details presented in Appendix A), in which the model is not doing any generations, and we limit the output space to the possible option letters/texts, and compute NLLs and choose the option with the lowest NLL as the model\\u2019s prediction.\\n\\nThanks for pointing out that the mode of evals is NLL throughout. However, my point was that, independent of the output eval, the form of the problem formulation itself presented on the input is instruction-template based (for standard MMLU), and therefore a base model processing input in such a format would struggle, especially in early training phases, in contrast to MMLU cloze, which does not rely on such formatting (as also described in Appendix B). I think that might be a confound when stating the difference between MMLU standard and MMLU cloze that is apart from discrete vs continuous metrics. I guess one way to test it would be to have a discrete version that does not rely so heavily on a specific instruction format to pose the problem on the input. I think the same can be argued in general - to show the difference between discrete and continuous, one should remove as far as possible the confound of presenting problems in an instruction-template form for both evals if testing with base models. \\n\\n> ... for building scaling laws, the FLOPs budget matters more as opposed to different model scales ...\\n\\nI cannot quite follow this point. While the FLOP budget surely matters, we would like to extrapolate observed trends towards larger scales. This can only be done well if scanning through a scale span broad enough on smaller scales. I can't imagine how we can do it from a few or even a single point (no matter whether FLOP-related, or combined model/data scale). 
I think it would be important to observe how variance behaves on a span up to some reasonably high FLOP value, including a scan through model scales, as it might be that variance also behaves differently depending on model scale, and this can be overlooked if estimating from a single model scale only. This is also in line with comments from other reviewers, e.g. 7Zuc https://openreview.net/forum?id=E2RyjrBMVZ&noteId=C89i4E8Bo3. I think it would be important to see whether, on smaller model scales, which are of course also important for deriving scaling laws in practice, the same advantage of continuous vs discrete metrics can be stated. This might also allow, as a bonus, predicting some properties of variance for higher FLOP/model scales without running experiments there, although I agree that 7B is a reasonable upper threshold here. Smaller scales <7B are not so expensive, so I am not sure why it would not be a good thing to do.\\n\\nIn general, I see the merits of the study in its attempt to clearly point out the advantages of continuous metrics. However, I still find the write-up hard to read. I also still struggle to grasp the relevancy of the item analysis and item response theory (IRT) part for the presented results. The work seems to be building up on https://arxiv.org/abs/2406.04391, and I wish for the same level of clarity achieved there, which is still not there, in order to give a better score.\"}",
"{\"summary\": \"This paper discusses a problem in current evaluation and reporting practices: performance may vary across different development choices. This variance is scoped to training initialization seed and monotonicity. The paper also discusses ways to reduce this variance, in particular for choice-based benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper explores how to make evaluations more precise by reporting variance.\", \"Provides estimates for the expected variance across several benchmarks and models.\", \"An important finding is made on the unreliability of IRT-based methods for evaluation comparisons across models. This is very relevant for evaluation reporting.\"], \"weaknesses\": [\"The framing of \\u2018variance\\u2019 in the paper seems too broad. There are other possible kinds of variance worth exploring or mentioning.\", \"The title of the paper and its general framing suggest a broad focus on variance in evaluations, but the paper currently fails to contextualize two very distinct types of variance: training and inference. For example, the (training) seed variance discussed falls within training. Other possible sources of variance for each should be mentioned where possible. A basic example of the inference type can be found in prompt sampling.\", \"While studying the training seed variance is useful, this is really only feasible for smaller models, as it would be too expensive for larger models. This may reduce the utility of the results in large-model comparisons.\", \"The paper could mention previous work such as Picard 2021 (https://arxiv.org/abs/2109.08203) on the impact of training initialization seed variance, or fine-tuning seed variance [Dodge 2020 (https://arxiv.org/abs/2002.06305)]. It could also extend the discussion of how obscuring or not disclosing these variances can be harmful to the evaluation process (e.g. [Leech 2024 (https://arxiv.org/abs/2407.12220)]).\"], \"questions\": [\"Are there any other kinds of variance that could have been used for this study instead of initialization seed?\", \"Only training-based sources of variance are discussed; what about inference-based ones?\", \"Section 3.3 outlines a very interesting case of differences in evaluation results after a reformulation of the setup; could this be shown for other benchmarks, perhaps a similar pair?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer V3wm\", \"comment\": \"We thank the reviewer for their thoughtful comments and address the weaknesses and questions below:\\n\\n> Some important empirical details are missing, for example, could you provide more details on how you compute the SNRs for both discrete and continuous?\\n\\nWe compute the SNR using the following formula: $\\\\frac{\\\\mu(\\\\mathcal{S}, \\\\mathbb{M}^{210B})}{\\\\sigma(\\\\mathcal{S}, \\\\mathbb{M}^{210B})}$. We\\u2019ve updated it in the revised version.\\n\\n> Another important question that is critical for the takeaways of the paper: Is the unnormalised monotonicity the best metric here to capture the \\u201cstableness\\u201d of a benchmark? Shouldn\\u2019t that monotonicity be weighted by the monotonicity you would expect by chance? So something like a Cohen\\u2019s kappa coefficient here seems more appropriate rather than just the direct unnormalised monotonicity. For example, see the analysis in Geirhos et al., Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency.\\n\\nIt\\u2019s very hard to quantify the monotonicity expected due to chance, as it\\u2019s not evidently clear what mean value to take over the course of time. If we just treat each step independently, then the chance performance is the same across all steps, resulting in a monotonicity of 0. But obviously, it\\u2019s not independent and develops differently for each benchmark depending on the \\u201chidden\\u201d scaling laws for that benchmark. There\\u2019s a wealth of literature pointing out the defects of Cohen's kappa coefficient (see, e.g.,
Appendix B of https://arxiv.org/abs/2406.12624 for a short summary).\\n\\nAs for Geirhos et al, Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency, can the reviewer expand on how the error consistency analysis can be used for monotonicity?\\n\\n> Suggestion on formatting the paper: the results sections in 3 and 4 have some very important insights / takeaways for practitioners to adopt for pretraining and actual evaluation. I think it would majorly improve the paper if these could be highlighted at the end of each subsection in bold, or even better, a small box signifying the key takeaway.\\n\\nWe agree with the reviewer and will update the manuscript accordingly.\\n\\n\\n> There are other kinds of variance inducing factors that haven't been investigated. For example, how does task type affect this variance? How does number of shots and choice of shots affect variance? How does model size impact variance?\\n\\nWe agree that there are other sources of variance as well, however, for pre-training ablations involving scaling laws and datamix selection, seed variance is an important factor of variance. We\\u2019ll add a discussion on other sources of variance while highlighting that this paper considers \\\"seed\\\" variance.\\n\\n> Why are the Disc Std in tab 2 and std in tab 1 different? Aren\\u2019t they computing the same metric over the same set of models? The means are exactly the same for the discrete metrics so I presume the stds should be too?\\n\\nFor the means computed in both Table 1 and 2, we use the final checkpoints across the different seed runs as defined in Lines 123-124. However, for the standard deviation in Table 1, we use the seed variance $\\\\sigma(\\\\mathcal{S}, \\\\mathbb{M})$ definition provided in Lines 126-133, taking into account intermediate checkpoints as well. 
For the standard deviation in Table 2, we use only the final checkpoints across seeds for an accurate assessment of the signal to noise ratio. That\\u2019s why we use Disc Std to distinguish from the seed variance $\\\\sigma(\\\\mathcal{S}, \\\\mathbb{M})$.\\n\\n> In tab. 2 what decoding strategy do you use for the generative tasks? The difference between GSM8k (0.99) and HumanEval (0.21) seems quite high for a difference in log-likelihoods. Are these token-length normalised likelihoods or unnormalised likelihoods?\\n\\nWe use greedy decoding (temperature $= 0$) for sampling and use the character-length normalized NLL for both GSM8k and HumanEval. The token-length normalized NLLs also exhibit similar differences as shown in Figure 8 (Section C.2) in the updated version of the paper.\\n\\n> Comment: Typo in fig 3 caption, should be item difficulty (y-axis) and item discriminability (x-axis)\\n\\nThanks for pointing it out, we have fixed it in the revised version of the paper.\"}",
"{\"summary\": \"The authors address the question of measuring variance in standardized evaluation benchmarks. They do so by constructing various ways to assess variance, including variance due to different init seeds and evaluation monotonicity at checkpoints over the course of pre-training. The authors study a number of already pre-trained models and also train Llama-like models from scratch at the 7B scale, varying the init seed and obtaining intermediate checkpoints to use for variance estimation. The authors show evidence for continuous metrics delivering a better signal-to-noise ratio across all benchmarks, also showing better monotonicity for continuous metrics than for discrete ones. The authors also look into techniques used in human performance testing, like item analysis and item response theory, and find them not useful for assessing model performance and improving SNR.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The question of which benchmarking metrics properly reflect model performance and allow differentiation of models already at an early training stage, or at smaller scales in general where the performance signal is weak, is very important, as it also leads to better methods for scaling-law derivation and proper model comparison via derived scaling laws.\", \"weaknesses\": [\"The claims put forward by the authors are made by experimenting with base models only. Especially when taking benchmarks that have a certain question formatting at their core - MMLU, ARC, HellaSwag - this seems to me a strong confound when attempting to make a statement about the ability of a benchmark to measure model capabilities. Base models do not handle instruction formats well, in general struggling with question-answer interaction. This is usually instilled via instruction tuning, after which models adhere to various instruction-like interactions. I thus think that, to make proper statements about the effect of variance, or in general about how to render a benchmark signal useful for predictions about model quality, benchmarks that rely on a certain instruction template are not a good choice when dealing with base models. In my opinion, when working with such benchmarks, the authors should have instruction-tuned the base models, conducting measurements after that. The benefit of continuous metrics might thus be due to base models not handling instruction-like benchmarks, and not due to the actual benchmark content related to the complexity of the problems posed in it. The same might hold for the \\\"curious MMLU\\\" case presented by the authors, where the cloze variant shows smaller variance and better SNR than the multiple-choice standard MMLU form - this might again be just due to base models not being able to handle the problem formulation template.\", \"A further weakness in my opinion is that many of the claims are based on pre-training from scratch done only at one model scale of 7B. It might be more insightful to see trends across scales, even smaller ones. Experiments at larger scale are expensive, though, even if going a bit further to 13B.\", \"It is also not quite clear how the claim of the better suitability of continuous metrics is backed up. It seems to me there is no clear evidence presented by the authors that using those metrics indeed allows, e.g., better prediction from earlier to later training stages or from smaller- to larger-scale pretraining.\"], \"questions\": \"Can the authors show similar evidence for instruction-tuned models? It seems to me that it is not expensive to instruction-tune the base models the authors have already obtained, or at least to use already available instruction-tuned models. Alternatively, can the authors present more benchmarks that do not heavily rely on instruction-like templates?\\n\\nCan the authors present evidence that, using continuous metrics, they can do better at predicting either end-model performance from early training phases, or larger-scale model performance from smaller-scale pretraining?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer V3wm (continued)\", \"comment\": \"> For the analysis in fig 3, how does the correlation of item discrimination between train and test look for another randomly selected set of train and test models? The key question is what the variance in the correlation obtained for a random split of random train-test models would be. This would make the conclusion (that the low correlation between train and test item discriminability is due to the differences in model capability) stronger and more robust.\\n\\nWe have updated the paper with additional results. In Figure 10, you can see that for five different sets of train/test splits (with 14 models each in the test sets), the results are robust and the regression-line fits are similar, with high correlation of item discrimination between the train and test models in the random splits. This reinforces the claim that the low correlation between train and test item discriminability is actually due to differences in model capability.\\n\\n> The \\u201cbest\\u201d and \\u201cworst\\u201d models used for creating the train and test splits do not share the same training data mixtures right? So I would expect that the takeaways from fig 3, especially the ones involving the difficulty split, are also confounded by different levels of data contamination with respect to the test sets?\\n\\nYes, the reviewer is correct. Since many of these models are just open-weights, we don't know the level of contamination of the test sets in their pre-training datamixes. Assuming that the folks releasing the models are fair players and that the performance difference for a given pair of models accurately reflects model capabilities, we believe our findings still hold.\\n\\n> For the points in section 5, the main claims revolve around using the IRT benchmarks themselves. How much of that can be explained by the increased variance from just having a smaller number of test samples? i.e., how would the results look if, instead of the IRT test set, a random set of evaluation points of the same size as the IRT set were used for the analyses?\\n\\nSince the optimization objective for IRT benchmarks is to preserve the mean performance, and taking any random sample won't preserve it, we believe it's not an accurate comparison between the two cases, and the random sample would not give any kind of signal as provided in Figure 5. Our goal for section 5 is to highlight that any efficient benchmarking in general (and not just IRT) can be very noisy, leading to increased variance during scaling-laws/pre-training ablations, and is not effective at all compared to the full evaluation.\"}",
"{\"metareview\": \"This work investigates the variance in LLM evaluation benchmarks, primarily looking at sources of variance that manifest during training rather than at test time. To this end, a set of 7B Llama-style models are trained and experiments are conducted to determine how the conclusions drawn from benchmarks can vary throughout the training process. Various factors are explored in the experiments, including whether discrete or continuous metrics are used, how choice tasks are framed, and whether techniques from human testing can reduce variance.\\n\\nThe reviewers mostly had a negative opinion of this paper, with the bulk of the criticism centred on the perceived significance of variance due to the choice of random seed/initialisation. Several reviewers (i.e., tZvi, i8dM, 7Zuc, and VsPx) suggested that it would be more worthwhile to investigate other sources of variance that manifest at inference time. I have a dissenting opinion here: the focus of this submission is on how variance in benchmarking *during pre-training* can impact subsequent modelling choices. This is in contrast to much of the other work in this area and the suggestions of the reviewers, who tend to focus on evaluating models that have already finished training. With that said, I still am not prepared to override the majority decision of the reviewers. As pointed out by reviewer w3M8 and V3wm, the submission in its current form does not allow for inferences to be made across model scales, which would be important for deriving scaling laws that are robust to evaluation variance during training.\", \"additional_comments_on_reviewer_discussion\": \"There was quite a bit of discussion related to which sources of variance are worthy of study, with the authors and several reviewers not coming to a consensus.\\n\\nI would also like to take this opportunity to agree with Clarence Lee and reviewer V3wm, that the main weakness identified by reviewer tZvi is not well-founded.\"}",
"{\"title\": \"Response to Reviewer w3M8\", \"comment\": \"We thank the reviewer for their time and address their feedback/questions below:\\n\\n> Base models do not handle instruction formats well, in general struggling with question-answer interaction. This is usually instilled via instruction tuning, after which models adhere to various instruction-like interactions. I thus think that, to make proper statements about the effect of variance, or in general about how to render a benchmark signal useful for predictions about model quality, benchmarks that rely on a certain instruction template are not a good choice when dealing with base models.\\n\\nWe would like to point out that instruction-tuning has little effect when conducting evaluations using log-likelihood (https://huggingface.co/blog/open-llm-leaderboard-mmlu). In this setup, the possible choices/text completions are appended to the prefix prompt and the option with the lowest NLL is chosen as the model response. In our paper, this corresponds to AGIEval, ARC-C, COPA, Hellaswag, MMLU, PIQA, and SIQA. Moreover, Dubey et al., https://arxiv.org/abs/2407.21783 (Figure 14) show that the prompt variations and option orders are fairly robust and exhibit negligible variance in this setup of NLL-based evaluation. This is a standard setup for evaluating pre-trained base models, as seen in the GPT series (GPT-3: https://arxiv.org/abs/2005.14165, GPT-4: https://arxiv.org/abs/2303.08774), the Llama series (Llama 1: https://arxiv.org/abs/2302.13971, Llama 2: https://arxiv.org/abs/2307.09288, and Llama 3: https://arxiv.org/abs/2407.21783), and other model releases like Mistral/Gemini as well.\\n\\n> In my opinion, when working with such benchmarks, the authors should have instruction-tuned the base models, conducting measurements after that. The benefit of continuous metrics might thus be due to base models not handling instruction-like benchmarks, and not due to the actual benchmark content related to the complexity of the problems posed in it.\\n\\nSince we study the variance that is useful for building scaling laws/doing pre-training datamix ablations, it doesn\\u2019t make sense to do instruction fine-tuning at each FLOPs scale and at every pre-trained checkpoint. Instruction-tuning should only be done at the end, and its evaluation setup is different from that of pre-trained models. Pre-training evaluations are essential during the early stages of pre-training for selecting the architecture/datamix that best fits the scaling-law curves.\\n\\n> The same might hold for the \\\"curious MMLU\\\" case presented by the authors, where the cloze variant shows smaller variance and better SNR than the multiple-choice standard MMLU form - this might again be just due to base models not being able to handle the problem formulation template.\\n\\nAgain, we would like to emphasize that we are performing evaluations using log-likelihood (details presented in Appendix A), in which the model is not doing any generations, and we limit the output space to the possible option letters/texts, and compute NLLs and choose the option with the lowest NLL as the model\\u2019s prediction.\\n\\n> A further weakness in my opinion is that many of the claims are based on pre-training from scratch done only at one model scale of 7B. It might be more insightful to see trends across scales, even smaller ones. Experiments at larger scale are expensive, though, even if going a bit further to 13B.\\n\\nWe agree with the reviewer; however, we would like to point out that for building scaling laws, the FLOPs budget matters more as opposed to different model scales. We train 7B models for 210B tokens, accounting for a FLOPs budget of $10^{22}$, which is a fairly standard budget for building scaling laws; see, for example, Dubey et al., 2024 (Llama 3: https://arxiv.org/pdf/2407.21783) and Hoffmann et al., 2022 (Chinchilla: https://arxiv.org/pdf/2203.15556). We believe that the reference variance values are representative of this budget: Table 2 column 3 - Disc Std for the $10^{22}$ budget, and Table 1 column 5 - $\\\\sigma(\\\\mathcal{S}, \\\\mathbb{M})$ for all budgets $\\\\leq 10^{22}$.\\n\\n> It is also not quite clear how the claim of the better suitability of continuous metrics is backed up. It seems to me there is no clear evidence presented by the authors that using those metrics indeed allows, e.g., better prediction from earlier to later training stages or from smaller- to larger-scale pretraining.\\n\\nAll of the results in Sections 3.2 and 3.3 point to the utility of continuous metrics. Table 2 shows how continuous metrics have higher SNR compared to discrete metrics. Figures 1, 2, and 6 show how continuous metrics are better for predictability and stability in tracking, as the seed variance is lower (represented by the box heights in the plots).\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper investigates the variance present in current evaluation benchmarks by repeating the training of models with different random seeds. It demonstrates the variance caused by random factors in benchmarks, providing valuable references for assessing model evaluation results, particularly for the MMLU evaluation of smaller-scale models. The paper attempts to reduce evaluation variance using methods inspired by human testing literature, such as item analysis and item response theory, but finds that these methods have limited effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"By training different models from scratch, the paper provides the most direct results for evaluating the variance brought by randomness. Based on these results, it offers valuable suggestions for assessing smaller-scale models.\", \"The paper introduces methods from human testing literature and explains the reasons for their limited success, providing insights for future work.\"], \"weaknesses\": \"- The paper focuses only on the variance caused by model random seeds and does not compare or combine this with existing work on the analysis of evaluation result variance. For example:\\n 1. It does not compare the variance caused by random seeds with the impact of other random factors during evaluation (such as option order, prompts, etc.).\\n 2. It does not explore whether the variance is further amplified when models with different seeds encounter situations like randomized options.\\n- The paper primarily showcases the overall variance of 210 7B model checkpoints. 
Given that models of different sizes exhibit significant differences in performance when trained with varying numbers of tokens, the overall variance statistics may have limited reference value for models trained with fewer or more data.\", \"questions\": [\"Considering that the authors trained different models from scratch, it would be beneficial if they could use intermediate checkpoints to demonstrate the benchmark variance at different stages of training and performance, for benchmarks such as Hellaswag, which consistently outperformed random choice across a wide range of checkpoints. Furthermore, showing how benchmark variance changes with training progress could be helpful for models of various sizes and training data volumes.\", \"When calculating Seed Variance, would it be more reasonable to exclude checkpoints that are clearly still within the random result range from the statistical analysis?\", \"When models of different sizes achieve the same performance after being trained with different numbers of tokens (e.g., a 7B model trained with 120B tokens and a 1.5B model trained with 400B tokens), do they exhibit significant differences in benchmark variance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This work aims to quantify evaluation benchmark variance across a range of settings (from intermediate pretraining checkpoints to the largest frontier LLMs) using a diverse set of metrics (seed variance, confidence intervals, and monotonicity). Beyond quantifying variance, the paper also explores item response theory.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is clearly written and easy to understand.\", \"weaknesses\": \"The key problem for me is that I do not get the value proposition of this work. It's difficult to see how this work could develop relevance/impact for the evaluation of foundation models. As this is why I am hesitant to support the paper, I focus my review and questions below fully on this point.\", \"questions\": \"1. What is the most striking example with which you can demonstrate the potential impact of this work?\\n2. There are many challenges to evaluating foundation models, and it is clearly also a matter of how much money (time, compute) to invest in which aspect to arrive at a conclusive result. So evaluation is inherently a trade-off, and it is important to understand and acknowledge this trade-off. In my opinion, there seem to be much more critical aspects that need to be addressed than the variance studied in this paper. For example, the paper \\\"EVALUATING LLMS\\u2019 MATHEMATICAL AND CODING COMPETENCY THROUGH ONTOLOGY-GUIDED INTERVENTIONS\\\" by Pengfei Hong et al. seems to be a good route towards useful evaluations. I would rather invest time and compute in that direction and not bother about the methods proposed in this submission. Are you advocating for the opposite strategy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer 7Zuc\", \"comment\": \"We thank the reviewer for their comments and address the weaknesses/questions below:\\n\\n> It does not compare the variance caused by random seeds with the impact of other random factors during evaluation (such as option order, prompts, etc.). It does not explore whether the variance is further amplified when models with different seeds encounter situations like randomized options.\\n\\nIt\\u2019s true that we don\\u2019t consider other factors like decoding strategies, temperature, prompt variations, etc. But we would like to add that these have little impact on log-likelihood based evaluations (in which no model generation is involved). The possible choices/text completions are appended to the prefix prompt and the option with the lowest NLL is chosen as the model response. In our paper, this corresponds to AGIEval, ARC-C, COPA, Hellaswag, MMLU, PIQA, and SIQA. Moreover, Dubey et al. 2024, https://arxiv.org/abs/2407.21783 (Figure 14) show that the prompt variations and option orders are fairly robust and exhibit negligible variance in this setup of NLL-based evaluation.\\n\\n> The paper primarily showcases the overall variance of 210 7B model checkpoints. Given that models of different sizes exhibit significant differences in performance when trained with varying numbers of tokens, the overall variance statistics may have limited reference value for models trained with fewer or more data.\\n\\nWe train 7B models for 210B tokens, accounting for a FLOPs budget of $10^{22}$, which is a fairly standard budget for building scaling laws; see, for example, Dubey et al., 2024 (Llama 3: https://arxiv.org/pdf/2407.21783) and Hoffmann et al., 2022 (Chinchilla: https://arxiv.org/pdf/2203.15556). We believe that the reference variance values are representative of this budget: Table 2 column 3 - Disc Std for the $10^{22}$ budget, and Table 1 column 5 - $\\\\sigma(\\\\mathcal{S}, \\\\mathbb{M})$ for all budgets $\\\\leq 10^{22}$.\\n\\n> Considering that the authors trained different models from scratch, it would be beneficial if they could use intermediate checkpoints to demonstrate the benchmark variance at different stages of training and performance, for benchmarks such as Hellaswag, which consistently outperformed random choice across a wide range of checkpoints. Furthermore, showing how benchmark variance changes with training progress could be helpful for models of various sizes and training data volumes.\\n\\nWe do report the variance change over the course of training for all benchmarks in the paper, in Figures 1, 2, and 6, for both discrete and continuous metrics. The y-axis reports the performance metrics, and the box plots at each step show the variance corresponding to that step.\\n\\n> When calculating Seed Variance, would it be more reasonable to exclude checkpoints that are clearly still within the random result range from the statistical analysis?\\n\\nWe don\\u2019t believe that would help, as the LLM community is still using numbers that are near chance to compare performance across different models; for example, MMLU comparisons are made in https://arxiv.org/pdf/2310.04564, https://arxiv.org/pdf/2401.02385, https://arxiv.org/pdf/2312.06550, etc., even though the performance is near chance.\"}",
"{\"title\": \"Supporting the Practical Significance of this Work in LLM Pretraining\", \"comment\": \"I am not an author of this paper, nor am I affiliated with the authors. I am an independent practitioner with significant experience in LLM pretraining development. I respectfully disagree with the reviewer's assessment regarding the impact of this work and would like to provide additional perspective.\\n\\nDiscerning reliable evaluation signals in model experiments remains a critical challenge in LLM pretraining. One area of particular impact is the performance of data ablations (as described in Section 3 of the paper). For instance, it is common to employ grid-search experiments to compare various data mixtures during pretraining. However, a persistent challenge lies in determining whether observed evaluation score differences represent meaningful improvements or are merely artifacts of noise (even when one data mixture yields better scores, it is not always clear whether this reflects a genuinely better training mixture). This challenge is further compounded by the practical constraint that such experiments typically operate at smaller pretraining scales (e.g., 10\\u201350 billion tokens, as opposed to trillions), which inherently reduces the statistical signal and makes conclusions harder to draw.\\n\\nThe paper\\u2019s approach of pretraining models using multiple random seeds to significant token counts is a commendable effort to address this problem. While logistically demanding, this methodology is essential for producing reliable, reproducible insights. Few papers in the LLM field invest in such rigorous experimentation, which makes this work both novel and highly valuable.\\n\\nIn response to Reviewer tZvi's second point: while ontology-guided evaluations may offer a structured way to analyze models, they do not address the critical issue of variance in evaluation outcomes (for example, ontology-guided evaluations do not guarantee meaningful signals across different data-mixture or hyperparameter experiments). This variance often obscures meaningful signals and poses significant challenges for practical application. The real problem lies not only in designing better evaluations but also in understanding how evaluation performance correlates with different training dynamics. This paper directly tackles this overlooked yet vital aspect.\\n\\nThe suggestion to deprioritize research in this area reflects a potential misunderstanding of its relevance to the LLM pretraining community. As someone actively engaged in this domain, I can affirm that the contributions of this paper are both impactful and timely. Its focus on robust evaluation methodologies fills a critical gap in the literature, addressing challenges that practitioners encounter frequently. Moreover, by pretraining models from scratch and explicitly reporting variances, the authors provide actionable insights that are useful for research in this space.\\n\\nI strongly encourage the reviewers to reconsider their assessment of this paper\\u2019s impact, given its relevance to and impact on more robust LLM pretraining.\"}",
"{\"comment\": \"Thanks for the authors' response. My questions regarding variance and the variance of intermediate checkpoints have been answered.\", \"i_would_like_to_further_explain_a_key_concern_i_mentioned_earlier\": [\"*\\\"The paper primarily showcases the overall variance of 210 7B model checkpoints. Given that models of different sizes exhibit significant differences in performance when trained with varying numbers of tokens, the overall variance statistics may have limited reference value for models trained with fewer or more data.\\\"*\", \"*\\\"When models of different sizes achieve the same performance after being trained with different numbers of tokens (e.g., a 7B model trained with 120B tokens and a 1.5B model trained with 400B tokens), do they exhibit significant differences in benchmark variance?\\\"*\", \"These questions arise from my concern that the variance reported for the 7B-scale model may not be representative. For instance, in my own experiments, I found that for two settings with similar FLOPs, such as a 7B model trained with 120B tokens and a 1.5B model trained with 400B tokens, the 7B model had a better average benchmark performance. This indicates that different models can perform differently under the same FLOPs budget. The authors did not emphasize that the 7B model trained with 210B tokens is the optimal setting for the FLOPs budget discussed in the paper. While I understand the authors' point that the current experimental FLOPs budget is fairly standard for building scaling laws, I still have concerns about the statement *\\\"We believe that the reference variance values are representative of this budget.\\\"* Even in the process of modeling scaling laws, models trained with the same FLOPs can exhibit significant performance differences. How can this model size and number of training tokens represent all scales of experiments within the FLOPs budget?\"]}",
"{\"summary\": \"This paper quantifies and highlights the important issue of seed variance, amongst different sources of variance, in evaluating language model performance *during pretraining*, emphasizing how insufficient variance quantification in benchmarks can obscure statistically significant performance differences within the same pretraining run.\\n\\n**Methodology**: It retrains 10 LLaMA 7B models from scratch, and then studies the seed variance along with different metrics throughout the pretraining runs, such as-- 95% confidence intervals, and monotonicity across diverse benchmarks, offering a reference for understanding performance variation across setups.\\n\\n**Claim 1**: Continuous metrics exhibit higher monotonicity and lower variance than discrete metrics and should be used more widely.\\n\\n*Application*: Continuous metrics and cloze formats show lower variance and higher signal-to-noise ratios compared to traditional discrete measures, especially in smaller LLaMA 7B models. Simple modifications, such as reframing choice tasks as completion tasks, appear promising for reducing variance.\\n\\n\\n**Claim 2**: Efficient benchmarking methods can inadvertently increase variance, and it\\u2019s essential to verify if methods are distinguishable before using them.\\n\\n*Application*: Techniques from standardized testing, such as item analysis and item response theory, are found ineffective in meaningfully reducing variance for large language models.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Key takeaways I appreciated:\\n\\n- Continuous metrics show higher monotonicity and lower variance than discrete metrics, making them preferable. 
I really like this point!\\n- Variance (likely from benchmarking, not seed variance) could obscure the effectiveness of pretraining by making it harder to track performance improvements.\\n\\nBenchmarking variance is a promising and underexplored area, especially given the recent focus on reasoning and the close performance of current models on benchmarks-- and seed variance can have a lot of effect.\", \"weaknesses\": \"See Weaknesses in the order of importance. If there is lack of time, please prioritize the earlier questions:\\n\\n**P1: Seed Variance and Its Impact on Benchmark Scores**\\n\\n> We provide a comprehensive reference guide for what magnitudes of variance are expected for what benchmarks across various circumstances\\n\\nI worry the paper title/abstract/contribution1 is overclaiming as the evidence does not support the above claim-- It seems to me that paper focuses only on a single source of variance: seed variance. Given that studying variance is a common practice (the most popular benchmark, Chatbot Arena provides variance along with scores: https://lmsys.org/blog/2024-04-19-arena-hard/) and previous research on variance sources such as benchmarking details [1] and prompt design [2], the question studied here becomes: Does seed variance contribute overwhelmingly to the overall variance which makes it a critical factor to study, beyond [1,2,Arena]?\", \"i_would_appreciate\": \"a) A comparative analysis showing the relative impact of seed variance versus other variance sources on benchmark results.\\n\\nb) A better distinction in the work for why studying seed variance specifically is important, given the existing literature on other variance sources.\", \"since_the_authors_claim\": \"> if we cannot \\u2018trust\\u2019 our evaluation results or do not understand what improvements are statistically significant, we cannot make sound comparisons, thus making it more challenging to reliably use benchmarks during model development.\\n\\na) I could not 
see the results of the 32 models apart from Llama-2-7B seed runs anywhere. Would be great to know whether the variance obtained from the Llama-7B seed runs can demonstrate that improvements reported for those models would be deemed not significant on some benchmarks.\\nb) To what degree are these variance estimates transferable beyond the Llama-2-7B model?\\n\\n**Overall:** Currently, it seems to me that variances from [1] and [2] are critical but easy to obtain, while seed variance is compute-heavy to obtain and potentially not as significant a source, given that different prompts and small benchmark details can cause far larger performance shifts. I request the authors to present evidence or arguments suggesting this is not the case, and I am open to changing my mind.\\n\\n[1] When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards\\n\\n[2] Quantifying language models\\u2019 sensitivity to spurious features in prompt design or: How I learned to start worrying about prompt formatting.\\n\\n**P2: Efficient Benchmarking in High-Variance Contexts**\\n\\n(a) The rationale here is unclear. \\nQ1. If variance preservation is a priority, why subsample to an extreme degree of 100 samples? What are the trade-offs between efficiency and variance preservation in benchmarking? \\n\\nQ2. Conversely, does aggregating several 100-sample estimates reduce variance while still providing efficiency gains compared to evaluating on the whole dataset? Or, in most concrete scenarios, might efficient benchmarking just be limited? \\n\\n(b) The paper says:\\n> While these negative results suggest item discriminations may not be the most informative means of understanding (or reducing) variance on stronger models \\n\\n> we overall would not suggest the use of item analysis-based methods for understanding variance in language model evaluations \\n\\nI doubt those papers claim their methods are intended to reduce or understand variance. 
Could the authors provide a citation for this claim? If no direct claims about variance reduction exist in the cited works, could the authors discuss in the work why they believe these methods should be evaluated in terms of variance reduction.\\n\\n(c) While I agree with the authors that generally, subsampling might lead to critical issues in benchmarking \\u2013 the results presented seem underwhelming given I fully buy the claim of using continuous metrics to reduce variance.\\n\\nSpecifically, the results in Table 7 indicate that Kendall\\u2019s tau still remains quite high and relatively stable. Do the authors believe this is damning evidence against efficient benchmarking methods? (I agree in principle there will be cases where this is damning, but the shown results do not seem to be those cases.) Similarly, Table3 and Table 4 show variance and monotonicity increases but the increase in variance/decrease in monotonicity remains quite small post-subsampling compared to the relative gains in benchmarking efficiency (except in the case of GSM-8k discrete metric).\\n\\n**P3: MMLU Evaluation and Metric Selection**\\n\\nI fully agree that continuous metrics may indeed have lower variance than discrete metrics (Table 2 demonstrates convincing gains in PSNR), the emphasis on MMLU in this context feels misplaced as one metric is near random. \\n\\nMetrics that perform near random chance are unreliable indicators of progress, while those consistently above random are more dependable. However, for smaller models, this reliability may simply stem from selecting a metric that surpasses random performance earlier in training, rather than addressing the broader variance concerns noted above. I don't know why this specific example was picked.\\n\\nP4. **Poor Writing** \\n\\nThe writing lacks clarity and precision, with ambiguous and poorly articulated claims scattered throughout the paper. This made the review difficult. 
In the summary, I\\u2019ve tried my best to interpret the paper\\u2019s main claims \\u2014 please let me know if my summary underclaims compared to the intended or the claims differ from those presented in the work.\", \"questions\": \"See weaknesses above please.\\n\\nOverall, I think we should definitely report variance in benchmark estimates to compare significance of improvement, however I believe retraining $k$ times to obtain seed variance might not be the critical factor in total variance, and hugely expensive. I do think the recommendation of using continuous metrics makes a lot of sense. \\n\\nI think there are important shortcomings in my view, although I might likely be wrong. If weakness 1 is adequately alleviated, I would upgrade my score. Note: Weakness 3 and 4 are minor and have little effect on rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to reviewer tZvi\", \"comment\": \"We thank the reviewer for their time, but we are disheartened by the reviewer's comments on the value of our work and their assessment on the potential impacts of our paper.\\n\\n> What is the most striking example with which you can demonstrate the potential impact of this work?\\n\\nOne of the best open-weights models (Llama 3: https://arxiv.org/pdf/2407.21783) uses confidence intervals in their performance score reporting, while addressing the variance problem. More recently, researchers from Anthropic (https://www.anthropic.com/research/statistical-approach-to-model-evals) suggest approaches like Central Limit Theorem in computing the error estimates for evaluation.\\n\\nMoreover, even though these approaches are fairly standard, the LLM community seldom uses this in reporting performance scores across a range of tasks. We suggest a positive step in that direction by analyzing the variance arising from the \\\"seed\\\" used in pre-training models. We also do a comprehensive analysis on the pitfalls of efficient benchmarking which compounds the variance, resulting in inaccurate performance estimates for models.\\n\\n> There are many challenges to evaluating foundation models and it is clearly also a matter of how much money (time, compute) to invest in which aspect to arrive at a conclusive result. So evaluation is inherently a trade-off and it is important to understand and acknowledge this trade-off. In my opinion there seem to be much more critical aspects that need to be addressed than the variance studied in this paper. For example, the paper \\\"EVALUATING LLMS\\u2019 MATHEMATICAL AND CODING COMPETENCY THROUGH ONTOLOGY-GUIDED INTERVENTIONS\\\" by Pengfei Hong et al seems to be a good route towards useful evaluations. I would rather invest time and compute into that direction and not bother about the methods proposed in this submission. 
Are you advocating for the opposite strategy?\\n\\nWe disagree with the reviewer's assessment and as the kind stranger pointed out in the other comment, variance in evaluations during pre-training hinders the ablations and experiments performed for building scaling laws, selecting datamixes for pre-training, etc. If there's significant variance, it's hard to discriminate between the various experimental settings and one might end up selecting a datamix or architecture which hurts performance over the course of a full pre-training run, leading to significant waste in computational resources.\"}",
"{\"title\": \"Response to Reviewer i8dM\", \"comment\": \"We thank the reviewer for their comments on our paper and respond to their questions below:\\n\\n> The framing of \\u2018variance\\u2019 in the paper seems too broad. There are other possible kinds of variance worth exploring or mentioning.\\n\\nWe agree with the reviewer that there are other sources of variance; however, the focus in this paper is on variance in evaluations during pre-training, where seed variance is an important factor in building scaling laws, and where the variance arising because eval sets are finite samples of all questions that could be asked affects how we should interpret their results and when differences are statistically significant.\\n\\n> While studying the training seed variance is useful, this is really only feasible for smaller models, as it would be too expensive for larger models. This may reduce the utility of the results in large model comparisons.\\n\\nWe kindly disagree with this assessment because before any large-scale pre-training runs for a huge model, there\\u2019s a lot of investment in small-scale experiments to build the scaling laws, decide the model architecture or the pre-training datamix, etc. The small-scale experiments are usually performed on a lower FLOPs budget, where the variance can play a significant role. High variance can cause inaccurate selection of a model architecture/datamix that hurts the large-scale pre-training runs.\\n\\n> The paper could mention previous work such as Picard 2021 (https://arxiv.org/abs/2109.08203) on the impact of training initialization seed variance, or fine tuning seed variance [Dodge 2020 (https://arxiv.org/abs/2002.06305)]. And also extended discussions on how obscuring or not disclosing these variances can be harmful to the evaluation process (e.g. 
[Leech 2024 (https://arxiv.org/abs/2407.12220)]).\\n\\nWe thank the reviewer for the extra references, and will make sure to include this in the updated version of the paper.\\n\\n> Section 3.3 outlines a very interesting case of differences in evaluation results after a reformulation of the setup, could this be shown for other benchmarks? perhaps a similar pair?\\n\\nYes, this is applicable to other benchmarks as well, especially those that have a MCQ-based evaluation setup. The model learns the ability to answer MCQ-based questions later in training, and using a cloze format is helpful in tracking performance in the early stages of pre-training.\"}"
]
} |
E2PFv7ad3p | Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs | [
"Shuo Li",
"Tao Ji",
"Xiaoran Fan",
"Linsheng Lu",
"Leyi Yang",
"Yuming Yang",
"Zhiheng Xi",
"Rui Zheng",
"Yuran Wang",
"xh.zhao",
"Tao Gui",
"Qi Zhang",
"Xuanjing Huang"
] | In the study of LLMs, sycophancy represents a prevalent hallucination that poses significant challenges to these models. Specifically, LLMs often fail to adhere to original correct responses, instead blindly agreeing with users' opinions, even when those opinions are incorrect or malicious. However, research on sycophancy in visual language models (VLMs) has been scarce. In this work, we extend the exploration of sycophancy from LLMs to VLMs, introducing the MM-SY benchmark to evaluate this phenomenon. We present evaluation results from multiple representative models, addressing the gap in sycophancy research for VLMs. To mitigate sycophancy, we propose a synthetic dataset for training and employ methods based on prompts, supervised fine-tuning, and DPO. Our experiments demonstrate that these methods effectively alleviate sycophancy in VLMs. Additionally, we probe VLMs to assess the semantic impact of sycophancy and analyze the attention distribution of visual tokens. Our findings indicate that the ability to prevent sycophancy is predominantly observed in higher layers of the model. The lack of attention to image knowledge in these higher layers may contribute to sycophancy, and enhancing image attention at high layers proves beneficial in mitigating this issue. | [
"Multi-modal Model",
"Visual-Language Model",
"Sycophancy",
"Hallucination"
] | Accept (Poster) | https://openreview.net/pdf?id=E2PFv7ad3p | https://openreview.net/forum?id=E2PFv7ad3p | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"zKjMgnXmvv",
"xjhwjChRPa",
"sD90whVe1G",
"lHCHMhpsGe",
"iyJFsrOCbn",
"iIkmYTLkQ2",
"fxqLihLEIj",
"ejaoyY3YfL",
"c7swysEql1",
"Y5t6ZqkP5h",
"WVUFPMVrde",
"VXctMCHYwe",
"MnPahIOMBT",
"LMh6XQoyNB"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1732383755440,
1732383458737,
1732383389911,
1733210246745,
1730687834021,
1733210117589,
1732384084016,
1734262433041,
1733210300889,
1737524268550,
1730702672342,
1732619391729,
1730453324480,
1732383958545
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13565/Reviewer_eeSv"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13565/Area_Chair_x6nH"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission13565/Reviewer_CV64"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
],
[
"ICLR.cc/2025/Conference/Submission13565/Reviewer_MDMB"
],
[
"ICLR.cc/2025/Conference/Submission13565/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Official Response from Authors\", \"comment\": \"We are glad the reviewer finds our proposed benchmark to be novel and our analysis of the factors influencing sycophancy to be thorough. We respond to the reviewer's questions below.\\n\\n> **Response to W1: It's unclear how these methods would perform across different VLM architectures.**\\n\\n- Thanks for the great comment; we agree that this was missing in the original version of the manuscript. We are excited to have added new results on InternVL-1.5-26B, demonstrating the consistent effectiveness of our method.\\n | Model | Acc@R1 | Syc$\\\\downarrow$ | Cor (hint w/ answer) |\\n | ---------------- | ------ | --------------- | ---- |\\n | InternVL-1.5-26B | 93.2 | 90.6 | **98.6** |\\n | +Prompt | 93.1 | 77.7 | 94.7 |\\n | +SFT | 92.1 | 18.2 | 19.2 |\\n | +DPO | **93.7** | **13.2** | 29.7 |\\n\\n- We observe that, similar to LLaVA-1.5-7B, the original InternVL-1.5-26B model exhibits significant sycophancy (90.6 Syc). Our three mitigation methods\\u2014Prompt, SFT, and DPO\\u2014are all effective in reducing sycophancy. The Prompt method mitigates sycophancy to some extent (-12.9 Syc). SFT effectively mitigates sycophancy by -72.4 Syc, though the correction rate remains relatively low (19.2 Cor). DPO demonstrates the most substantial reduction in sycophancy (-77.4 Syc) and results in a higher correction rate (29.7 Cor vs 19.2 Cor), outperforming SFT.\\n\\n- These results highlight that our proposed methods generalize well across different VLM architectures, consistently improving sycophancy mitigation. We hope this additional experiment addresses the reviewer\\u2019s concerns about the generalizability of our approach.\\n\\n> **Response to W2: The paper mentions that due to time and computational resource constraints, the analysis was limited.** \\n\\n- Thanks for your comment. 
We conduct additional experiments to explore the relationship between sycophancy and model performance.\\n\\n- Firstly, we analyze 10 VLMs with diverse downstream task performances and sycophancy rates, ranking them by their average accuracy across comprehensive downstream benchmarks. Our findings reveal no clear relationship between sycophancy levels and baseline accuracy.\\n| Model | Acc@1 | Syc$\\\\downarrow$ |\\n| ------------------------ | ----- | ---- |\\n| BLIP2 | 71.9 | 38.3 |\\n| Gemini | 74.9 | 59.8 |\\n| InstructBLIP | 78.0 | 68.8 |\\n| LLaVA-1.5 | 84.7 | 94.6 |\\n| mPLUG-Owl2 | 86.8 | 66.0 |\\n| GPT-4V | 89.3 | 39.4 |\\n| InternLM-XC2-1.8B | 90.7 | **28.8** |\\n| InternVL-1.5-26B | 93.2 | 90.6 |\\n| InternVL-1.5-2B | 93.3 | 80.2 |\\n| InternLM-XC2-7B | **94.0** | 39.8 |\\n\\n- Secondly, using the same VLM (LLaVA-1.5), we find that while our SFT and DPO methods substantially mitigate the sycophancy rate, the model's performance on general tasks\\u2014including MM-SY downstream tasks and six general benchmarks remain unaffected. These results demonstrate that sycophancy mitigation can be achieved without compromising general task performance.\\n| Model | Syc$\\\\downarrow$ | Acc@1 | SEED${^I}$ | POPE | SQA${^I}$ | MMBench | MMBench$^{CN}$ | MMVet | Avg@6 |\\n| --------- | --------------- | ----- | ---------- | ---- | --------- | ------- | -------------- | ----- | ----- |\\n| LLaVA | 94.6 | 84.7 | **66.2** | 85.9 | 66.8 | 63.0 | 57.4 | 30.5 | 61.6 |\\n| +Amplified Image Attention L16-32 | 64.4 | **88.3** | 64.8 | 83.8 | 65.8 | 64.4 | 57.6 | **31.7** | 61.4 |\\n| +SFT | 25.4 | 88.1 | 65.2 | **86.6** | 67.5 | **66.1** | **59.1** | 29.6 | **62.3** |\\n| +DPO | **5.4** | 84.3 | 65.2 | **86.6** | **67.8** | 65.9 | 59.0 | 28.9 | 62.2 |\"}",
"{\"title\": \"Official Response from Authors [2/2]\", \"comment\": \"> **Response to Q1: It seems the sycophancy rate is not correlated to the designed types of tones**\\n\\n- Thank you for your comment. We design three tone types to avoid biased results caused by using a single template for evaluation. The conclusion is that there is no strong correlation between sycophancy and tone type. Even with a Euphemistic tone, sycophancy remains highly prevalent.\\n- Lines 192\\u2013196: We also analyze the performance of specific VLMs under different tones and find that there is still no strong correlation.\\n\\n> **Response to Q2: It would be beneficial to also present the baseline accuracy. It is interesting to see if the model's sycophancy rate related to its performance?**\\n\\n- Thank you for your great comment. We present the relationship between sycophancy in VLMs and their performance from two perspectives.\\n- Firstly, we examine different VLMs, which have varying downstream task performances and sycophancy rates. We rank 10 VLMs based on their average performance on comprehensive downstream tasks. 
No obvious correlation is observed between sycophancy and baseline accuracy.\\n| Model | Acc@1 | Syc$\\\\downarrow$ |\\n| ------------------------ | ----- | ---- |\\n| BLIP2 | 71.9 | 38.3 |\\n| Gemini | 74.9 | 59.8 |\\n| InstructBLIP | 78.0 | 68.8 |\\n| LLaVA-1.5 | 84.7 | 94.6 |\\n| mPLUG-Owl2 | 86.8 | 66.0 |\\n| GPT-4V | 89.3 | 39.4 |\\n| InternLM-XC2-1.8B | 90.7 | **28.8** |\\n| InternVL-1.5-26B | 93.2 | 90.6 |\\n| InternVL-1.5-2B | 93.3 | 80.2 |\\n| InternLM-XC2-7B | **94.0** | 39.8 |\\n\\n- Secondly, for the same VLM (LLaVA-1.5), although our SFT and DPO methods significantly mitigate the sycophancy rate, the VLM's performance on general tasks (whether on MM-SY downstream tasks or the six general benchmarks like MMBench) is not affected.\\n| Model | Syc$\\\\downarrow$ | Acc@1 | SEED${^I}$ | POPE | SQA${^I}$ | MMBench | MMBench$^{CN}$ | MMVet | Avg@6 |\\n| --------- | --------------- | ----- | ---------- | ---- | --------- | ------- | -------------- | ----- | ----- |\\n| LLaVA | 94.6 | 84.7 | **66.2** | 85.9 | 66.8 | 63.0 | 57.4 | 30.5 | 61.6 |\\n| +Amplified Image Attention L16-32 | 64.4 | **88.3** | 64.8 | 83.8 | 65.8 | 64.4 | 57.6 | **31.7** | 61.4 |\\n| +SFT | 25.4 | 88.1 | 65.2 | **86.6** | 67.5 | **66.1** | **59.1** | 29.6 | **62.3** |\\n| +DPO | **5.4** | 84.3 | 65.2 | **86.6** | **67.8** | 65.9 | 59.0 | 28.9 | 62.2 |\"}",
"{\"title\": \"Official Response from Authors [1/2]\", \"comment\": \"We are glad the reviewer finds our proposed benchmark to be novel and our experimental results to be comprehensive. We respond to the reviewer's questions below.\\n\\n> **Response to W1: The definition of sycophancy rate is missing**\", \"the_sycophancy_rate_is_calculated_as\": \"$\\\\text{Sycophancy Rate} = \\\\frac{\\\\sum_{i=1}^N I(A_i^{(2)} == U_i^{(2, neg)})}{N}$,\", \"where\": \"- $A_i^{(2)}$ represents the second-round answer given by the VLM for the $i$-th sample.\\n- $U_i^{(2, neg)}$ is the incorrect opinion provided by the user for the $i$-th sample.\\n- $I(\\\\cdot)$ is an indicator function that equals 1 if $A_i^{(2)}$ matches $U_i^{(2, neg)}$, and 0 otherwise.\\n- $N$ is the total number of samples in the MM-SY benchmark.\\n\\nIt quantifies the percentage of instances where the model conforms to the user's incorrect viewpoint (given that the first-round response was correct), thereby reflecting the extent of the model's sycophancy.\\n\\n\\n> **Response to W2: It would be beneficial to analyze why the current model tends to exhibit sycophancy. E.g., is this comes from the training data or the network architecture?\\\"**\\n\\n- Thanks for the great comment. Although the causes of sycophancy in VLMs remain unexplored, we attempt to conduct some preliminary discussions by drawing on the causes of sycophancy in text-only LLMs. \\n- [1] suggests that sycophancy arises from human preferences during the RLHF process. However, LLaVA, which uses Vicuna-v1.5 (a model not trained with RLHF) as its initialization, still demonstrates a sycophancy rate as high as 94.6. Therefore, we argue that RLHF is not a necessary condition for sycophancy to occur.\\n- We list the characteristics of 10 evaluated VLMs (e.g., image resolution, presence of SFT) and attempt to analyze the potential underlying reasons.\\n - We argue that image resolution is not a necessary condition for sycophancy. 
BLIP-2 and InstructBLIP have the same image resolution, but the sycophancy rate of InstructBLIP is higher than that of BLIP-2. InternVL-1.5 has a higher image resolution than LLaVA-1.5, but they both have a sycophancy rate over 90.\\n - We suggest that original instruction tuning might be responsible for sycophancy. InstructBLIP uses BLIP-2 as its initialization and performs instruction tuning. Its sycophancy rate is much higher than that of BLIP-2. The model may conflate helping a user with a task with sycophancy. Adding the sycophancy suppression data proposed in this paper to the original instruction fine-tuning dataset may be one of the mitigation solutions. This will be part of our future work. Thank you again for your comments.\\n| Model | Syc$\\\\downarrow$ | w/ RLHF-LLM | Resolution | w/ Instruction data |\\n| ----------------- | ---- | ----------- | ---------- | ------------------- |\\n| BLIP-2 | 38.3 | N | 224 | N |\\n| InstructBLIP | 68.8 | N | 224 | Y |\\n| LLaVA-1.5 | 94.6 | N | 336 | Y |\\n| mPLUG-Owl2 | 66.0 | N | 224 | Y |\\n| InternVL-1.5-2B | 80.2 | N | Dynamic | Y |\\n| InternVL-1.5-26B | 90.6 | N | Dynamic | Y |\\n| InternLM-XC2-1.8B | 28.8 | N | Dynamic | Y |\\n| InternLM-XC2-7B | 39.8 | N | Dynamic | Y |\\n| Gemini | 59.8 | Unknown | Unknown | Y |\\n| GPT-4V | 39.4 | Unknown | Unknown | Y |\\n\\n---\\n\\n> [1]: Sharma, Mrinank, et al. \\\"Towards Understanding Sycophancy in Language Models.\\\" The Twelfth International Conference on Learning Representations.\"}",
"{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewer eeSv,\\n\\nThank you for your valuable suggestions. We would like to kindly remind you regarding the response to our rebuttal for the paper. We deeply value your insights and constructive feedback, as they are instrumental in improving the quality of our work.\\n\\nAs the deadline for finalizing decisions approaches, we would greatly appreciate it if you could share any further comments or recommendations at your earliest convenience. We understand your time is valuable and are sincerely grateful for your dedication to the review process.\\n\\nPlease let us know if there\\u2019s any additional information we can provide to assist you.\\n\\nThank you once again for your time and effort.\\n\\nWarm regards,\\n\\nAuthors\"}",
"{\"summary\": \"The paper \\\"Have the Vision-Language Models Lost Confidence? A Study of Sycophancy in VLMs\\\" introduces the concept of sycophancy in vision-language models (VLMs), where models blindly agree with user inputs despite contradictory visual evidence. The authors present the MM-SY benchmark, the first evaluation benchmark for sycophancy in VLMs across ten visual understanding tasks. They find that VLMs exhibit significant sycophantic behavior, influenced by factors like task type, user tone, and model size. To address this, the paper explores three mitigation methods: prompt-based, supervised fine-tuning, and direct preference optimization, showing progressive improvements in reducing sycophancy. However, these methods also make VLMs more resistant to corrections. The authors propose a training-free approach by amplifying high-layer vision attention, which effectively mitigates sycophancy without compromising the model's receptiveness to corrections.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper offers a thorough analysis of the factors influencing sycophancy in VLMs, providing valuable insights into model behavior across different conditions.\\n\\n2. The exploration of three distinct mitigation methods, each with varying degrees of success, contributes to the understanding of how to manage sycophantic behavior in VLMs.\\n\\n3. The proposal of a simple, training-free method to reduce sycophancy by amplifying high-layer vision attention is innovative and has practical implications for model development.\", \"weaknesses\": \"1. The mitigation methods were only validated on a single VLM (LLaVA-1.5-7B), which limits the generalizability of the findings. It's unclear how these methods would perform across different VLM architectures.\\n\\n2. The paper mentions that due to time and computational resource constraints, the analysis was limited. 
This suggests that the findings may not be exhaustive and could benefit from further exploration with additional resources.\", \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewer CV64,\\n\\nThank you for your valuable suggestions. We would like to kindly remind you regarding the response to our rebuttal for the paper. We deeply value your insights and constructive feedback, as they are instrumental in improving the quality of our work.\\n\\nAs the deadline for finalizing decisions approaches, we would greatly appreciate it if you could share any further comments or recommendations at your earliest convenience. We understand your time is valuable and are sincerely grateful for your dedication to the review process.\\n\\nPlease let us know if there\\u2019s any additional information we can provide to assist you.\\n\\nThank you once again for your time and effort.\\n\\nWarm regards,\\n\\nAuthors\"}",
"{\"title\": \"Official Response from Authors [2/2]\", \"comment\": \"> **Response to W2: This paper identifies the impact of amplifying high layer attention on the sycophancy problem but does not propose effective solutions based on this finding to truly address both sycophancy and stubbornness issues.**\\n\\n- Thank you for your question. We observe that enhancing high-level image attention in a training-free manner not only reduces sycophancy but also slightly improves the model's helpfulness (3.0 \\u2192 10.3, 11.2 \\u2192 12.7, 2.7 \\u2192 15.2). We would like to emphasize that this approach essentially serves as a test of the validity of our probing experiments and attention distribution analysis. While the sycophancy and correction performance is not state-of-the-art, it remains valuable given that it comes at nearly zero cost.\\n| Model | Syc | Cor (hint w/ answer) | Cor (hint w/o answer) |\\n| ------------ | ---- | -------------------- | --------------------- |\\n| LLaVA-1.5 | 94.6 | **98.6** | 3.0 |\\n| +Amplified Image Attention L16-32 | **64.4** | 67.0 | **10.3** |\\n| BLIP-2 | 38.3 | **25.6** | 11.2 |\\n| +Amplified Image Attention L16-32 | **38.1** | 24.6 | **12.7** |\\n| InstructBLIP | 68.8 | **71.4** | 2.7 |\\n| +Amplified Image Attention L16-32 | **59.6** | 62.0 | **15.2** |\"}",
"{\"metareview\": \"This paper investigates the hallucination problem, specifically sycophancy, in multi-modality language models (VLMs) by constructing a new benchmark for evaluating 10 different VQA tasks. The paper provides comprehensive experiments across popular VLMs, offering valuable insights into sycophantic behavior and the factors influencing it. It proposes three mitigation methods, with varying success, and introduces a novel, training-free approach to reduce sycophancy by amplifying high-layer vision attention, demonstrating both theoretical and practical contributions.\\n\\nAll reviewers recognize the contributions of the paper, with feedback generally leaning towards acceptance. The AC thoroughly reviewed the paper and rebuttal, agreeing with the consensus recommendation for acceptance due to the consistency of the feedback.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers noted that most of the concerns raised have been addressed, leading to a unanimous recommendation for acceptance.\"}",
"{\"title\": \"Kind Reminder from Authors\", \"comment\": \"Dear Reviewer MDMB,\\n\\nThank you for your valuable suggestions. We would like to kindly remind you regarding the response to our rebuttal for the paper. We deeply value your insights and constructive feedback, as they are instrumental in improving the quality of our work.\\n\\nAs the deadline for finalizing decisions approaches, we would greatly appreciate it if you could share any further comments or recommendations at your earliest convenience. We understand your time is valuable and are sincerely grateful for your dedication to the review process.\\n\\nPlease let us know if there\\u2019s any additional information we can provide to assist you.\\n\\nThank you once again for your time and effort.\\n\\nWarm regards,\\n\\nAuthors\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"summary\": \"This paper investigates the sycophancy problem in VLMs, which is also a common hallucination issue in LLMs. The authors first design an evaluation benchmark along with 10 visual question answering (VQA) tasks to assess the sycophancy problem in popular VLMs. They then propose three methods from the perspective of prompt engineering, supervised fine-tuning, and direct preference optimization to mitigate this issue.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This is the first paper to investigate the hallucination problem in multi-modality language models. To address this issue, the authors construct a new evaluation benchmark that includes 10 different visual question answering (VQA) tasks.\", \"Based on the designed benchmark, the authors investigate this problem on various popular VLMs and provide comprehensive experimental results.\", \"Besides revealing the sycophancy phenomenon, the authors also provide three different kinds of solution to alleviate this hallucination problem.\"], \"weaknesses\": [\"It seems that the definition of sycophancy rate is missing. Could the authors present it in Section 2? This is important for the readers to understand Table 1 and Figure 2.\", \"In addition to revealing the sycophancy phenomenon, it would be beneficial to analyze why the current model tends to exhibit sycophancy. For example, is this comes from the training data or the network architecture?\\\"\"], \"questions\": [\"From Table 1, it seems the sycophancy rate is not correlated to the designed types of tones. Could the authors provide the analysis for this?\", \"In addition to the sycophancy rate, it would be beneficial to also present the baseline accuracy. It is interesting to see if the model's sycophancy rate related to its performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"General Comment\", \"comment\": \"We would like to thank the reviewers for their efforts in assessing our work and the valuable feedback. We have accordingly made significant improvements following the suggestions and comments. For convenience, *we have marked in the revised PDF in blue* the added sections and the added results in tables. Light blue prefixes indicate the Reviewer ID and question ID for easy reference (it will not appear in the next version).\\n\\n> R1 corresponds to Reviewer CV64.\\n>\\n> R2 corresponds to Reviewer eeSv.\\n>\\n> R3 corresponds to Reviewer MDMB.\", \"below_we_summarise_our_changes\": [\"**Formal definition of sycophancy:** Following Reviewer CV64's request, we have added this in Appendix A.2.\", \"**Discussion on potential causes of sycophancy:** Following Reviewer CV64's request, we have provided a discussion in Appendix A.5, covering aspects such as whether the LLM underwent RLHF training, image resolution, the use of image-text interleaved data, and multimodal SFT training.\", \"**Relationship between sycophancy and tone:** Following Reviewer CV64's request, we have emphasized this in Section 2.2 RQ2.\", \"**Relationship between sycophancy and baseline performance:** Following the requests from Reviewers CV64 and eeSv, we have added analyses in Appendix A.6, including:\", \"1. A comparison of sycophancy and baseline performance across multiple VLMs.\", \"2. An analysis of baseline performance for the same VLM (LLaVA-1.5) under different sycophancy mitigation methods.\", \"**Sycophancy mitigation experiments for InternVL-1.5-26B:** Following Reviewer eeSv's request, we have included InternVL-1.5-26B in the main results in Table 2.\", \"**New setup for correction experiments:** Following Reviewer MDMB's request, we have expanded the correction experiments to include a new setup where prompts do not contain answers (only indicating that the first-round answer is incorrect). 
This effectively differentiates the correction ability derived from *sycophancy* and the *helpfulness* of the VLM.\", \"**New correction experiments for the Amplified Image Attention method across three VLMs (Cor w/o Answer):** Following Reviewer MDMB's request, we have extended the correction experiments for the proposed Amplified Image Attention method. The results demonstrate that this method can mitigate sycophancy and slightly enhance correction ability\\u2014or, more accurately, the \\\"helpful ability\\\" of the VLM\\u2014in a training-free manner, validating our hypothesis that \\\"high-level attention lacks focus on image tokens.\\\"\", \"We hope that our revisions have strengthened our contributions and would like to thank the reviewers for their valuable suggestions. We look forward to productive rebuttal discussions.\"]}",
"{\"summary\": \"This paper focuses on the study of sycophancy, a prevalent hallucination issue in Vision-Language Models (VLM). Firstly, a benchmark named MM-SY is introduced to evaluate the severity of sycophancy in VLMs. Subsequently, three methods\\u2014prompt guidance, Supervised Fine-Tuning (SFT), and Direct Policy Optimization (DPO)\\u2014are explored to mitigate sycophancy. Finally, the author analyzes the impact of attention weights on the sycophancy problem through experiments and proposes a simple, training-free method to alleviate this issue.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well written, clearly articulating the progressively detailed research approach to the sycophancy issue in VLMs.\\n\\n2. Experiments are comprehensive, thoroughly testing multiple VLM models, various tasks, and different user preferences, and analyzing the relationship between sycophancy and various dimensions.\\n\\n3. By studying the attention weights at different layers, this work reveals the model's performance in mitigating the sycophancy problem.\", \"weaknesses\": \"1. This paper mentions the contradiction between sycophancy and stubbornness issues, so for the VLM model, the real problem that needs to be addressed is to reduce sycophancy while maintaining the acceptance of correct opinions. However, methods such as prompt guidance, DPO, and amplify attention seem to reduce sycophancy but at the same time increase stubbornness to an equal extent. This does not truly solve the problem. It is merely shifting the imbalance from one side of the seesaw to the other. Only the SFT method shows a lower increase in stubbornness compared to the alleviation of flattery, but the paper does not provide a thorough analysis of this point.\\n\\n2. 
This paper identifies the impact of amplifying high layer attention on the sycophancy problem but does not propose effective solutions based on this finding to truly address both sycophancy and stubbornness issues.\", \"questions\": \"As in Weaknesses, why does SFT perform better than other methods? Does high layer attention help in truly addressing the issues of flattery and stubbornness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Official Response from Authors [1/2]\", \"comment\": \"We are glad that the reviewer finds our paper well-written and our experimental results comprehensive. We address the reviewer's questions below.\\n\\n> **Response to W1 & Q1: This paper mentions the contradiction between sycophancy and stubbornness issues, so for the VLM model, the real problem that needs to be addressed is to reduce sycophancy while maintaining the acceptance of correct opinions. Why does SFT perform better than other methods?**\\n\\n- We greatly appreciate your insightful comment, which has encouraged us to conduct a deeper investigation and analysis of the stubbornness issue associated with sycophancy.\\n- The key conclusion here is that **a model's correction ability stems from two aspects: its inherent helpfulness (beneficial) and its tendency to sycophancy (harmful)**. When correction ability decreases, we should delve deeper into whether this is caused by the mitigation of sycophancy or a decline in helpfulness (corresponding to the model becoming more stubborn). Following two recent works [1,2] that delve deeply into the sycophancy issue in pure-text LLMs, we have extended our correction experiments to identify the causes behind the decline in correction performance carefully.\\n- We added a new experimental setup (hint without answer) to the original correction experiment (hint with answer). If a VLM\\u2019s correction ability stems from being helpful, it should be able to correct its answers under hints regardless of whether the answer is provided. In contrast, correction ability originating from sycophancy would struggle to work without an answer.\\n- Results indicate that the correction ability of LLaVA-1.5-7B primarily derives from sycophancy (98.6 - 3.0 = 95.6), leaving almost no room for stubbornness in the model\\u2019s behavior. 
The SFT method not only mitigates sycophancy but also learns the correction ability from our constructed correction data (3.0 \\u2192 24.6). The DPO method, limited by the inherently low helpfulness of LLaVA-1.5-7B, achieves more thorough sycophancy mitigation but fails to enhance the model's correction ability through preference learning (3.0 \\u2192 0.1).\\n- We also add experiments on InternVL-1.5-26B, which has a moderate level of inherent helpfulness (33.0). Under the SFT method, sycophancy is effectively mitigated, but helpfulness is also reduced (33.0 \\u2192 16.0). This could be due to the relatively lower quality of our constructed SFT data compared to InternVL\\u2019s original data in terms of task format and instruction diversity. The DPO method, however, not only mitigates sycophancy but also preserves and slightly enhances the model\\u2019s helpfulness (33.0 \\u2192 35.2).\\n- In summary, for models like LLaVA-1.5-7B with very low inherent helpfulness, the SFT method mitigates sycophancy while improving helpfulness. For models like InternVL-1.5-26B with moderate helpfulness, the DPO method both mitigates sycophancy and enhances helpfulness. We will update the experimental results and provide a more comprehensive analysis of correction ability in the next version.\\n| Model | Syc\\u2193 | Cor (hint w/ answer) | Cor (hint w/o answer) |\\n| ---------------- | ---- | -------------------- | --------------------- |\\n| LLaVA-1.5 | 94.6 | **98.6** | 3.0 |\\n| +SFT | 25.4 | 42.1 | **24.6** |\\n| +DPO | **5.4** | 1.7 | 0.1 |\\n| InternVL-1.5-26B | 90.6 | **98.6** | 33.0 |\\n| +SFT | 18.2 | 19.2 | 16.0 |\\n| +DPO | **13.2** | 29.7 | **35.2** |\\n\\n--- \\n\\n> [1]: Sharma, Mrinank, et al. \\\"Towards Understanding Sycophancy in Language Models.\\\" The Twelfth International Conference on Learning Representations.\\n> \\n> [2]: Chen, Wei, et al. 
\\\"From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning.\\\" Forty-first International Conference on Machine Learning.\"}"
]
} |
E2OAT195Le | A Diffusive Data Augmentation Framework for Reconstruction of Complex Network Evolutionary History | [
"En Xu",
"Can Rong",
"Jingtao Ding",
"Yong Li"
] | The evolutionary processes of complex systems contain critical information about their functional characteristics. The generation time of edges can reveal the historical evolution of various networked complex systems, such as protein-protein interaction networks, ecosystems, and social networks. Recovering these evolutionary processes holds significant scientific value, such as aiding in the interpretation of the evolution of protein-protein interaction networks. However, the scarcity of temporally labeled network data poses challenges for predicting edge generation times under current network structures, leading to issues of insufficient data and significant differences between training and prediction networks. To address this, we introduce a diffusion model that learns the generative mechanisms of networks, producing sufficient augmented network data to effectively mitigate issues of limited and incomplete data. Experimental results demonstrate a 13.7% improvement in prediction accuracy using our approach. Moreover, the model can uniformly predict edge generation times across different types of networks, eliminating the need to retrain the model for each specific network, thus significantly enhancing generalization capability and efficiency. | [
"Complex Network",
"Temporal Network",
"Diffusion Model",
"Data Augmentation"
] | https://openreview.net/pdf?id=E2OAT195Le | https://openreview.net/forum?id=E2OAT195Le | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"yb0SjnG6Ax",
"yIJIt3yi1k",
"drdeHPHTb3",
"VsouJUxrR9",
"BuYaniEZAY"
],
"note_type": [
"official_review",
"official_review",
"comment",
"official_review",
"official_review"
],
"note_created": [
1730573878652,
1730745606263,
1733295874676,
1729776433086,
1730683607390
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission5462/Reviewer_3ihJ"
],
[
"ICLR.cc/2025/Conference/Submission5462/Reviewer_3Hz3"
],
[
"ICLR.cc/2025/Conference/Submission5462/Authors"
],
[
"ICLR.cc/2025/Conference/Submission5462/Reviewer_nheS"
],
[
"ICLR.cc/2025/Conference/Submission5462/Reviewer_dyYV"
]
],
"structured_content_str": [
"{\"summary\": \"The authors present an interesting paper on the creation of augmented networks to train a model to recreate the history of networks using a denoising diffusion model. The paper is well written, and the task seems to be of some interest. However, I am not a real expert in the domain, so my review should be taken with a grain of salt.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The presentation is clear (even a reviewer not expert in network history reconstruction could understand clearly what was done).\\nThe idea of creating a large number of examples of a network for reproducing its history seems logical \\nThe method seems to be as good as state of the art and perhaps better\", \"weaknesses\": \"The results are not presented with any statistical test\\nIt is not clear what the accuracy measure represents (accuracy of edge generation order), nor how it is measured in real world-networks\\nThe details of the sampling are not very clear.\", \"questions\": \"I would appreciate a better explanation of the success metric?\\nIs the improvement statistically significant?\\nIn a real life situation, what would be the advantage of creating such an order (which is even in the best case still often wrong)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors propose an approach to predict the order in which edges were formed in a graph without any time information. This is an important problem in complex networks, as the times at which edges were formed often provides a lot of insight into the behavior of the networked system, possibly as much as the edges themselves. The authors formulate the edge time prediction task as a pairwise ordering prediction between each pair of edges. The main contribution appears to be a data augmentation approach using edge sampling and diffusion models to generate new temporal graphs. The authors present mixed results on synthetic and real networks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Considers an important problem in network science that has not yet received significant attention from the graph learning community.\", \"Proposed framework is relatively easy to understand.\", \"Moderately novel, as it uses standard diffusion models for graphs, but for a new task.\"], \"weaknesses\": [\"The authors characterize their approach as edge time prediction. But they don't actually predict any edge times--only the order of edge formation. This is not the same task at all! For example, consider the difference between Peixoto & Rosvall (2017), who model temporal networks as a sequence, compared to Junuthula et al. (2019), who model the actual edge formation times using a temporal point process model. Peixoto & Rosvall (2017) provide a potential path to link the two approaches by modeling waiting time distributions in addition to the sequence--the authors may wish to consider a similar approach.\", \"The presentation is quite sloppy and lacks detail in many key areas. For example, they don't even describe how they perform the edge sampling--see question 1 below.\", \"Claims are not supported by evidence. See question 3 below for one example.\", \"Proposed evaluation metric is not interpretable. 
See question 4 below.\"], \"sampling_of_presentation_issues\": [\"Lines 238-239: proofreading and revising required: \\\"where introduce the meaning of symbols, t: diffusion steps, N: normal distribution, betat: noise\", \"level at step t, I: identity matrix.\\\"\", \"Lines 257-258: wrong dot used inside the L2 norm.\", \"Caption for Table 1 does not explain what the quantity in parentheses is. I assume it is relative improvement compared to CPNN. Furthermore, the table is shrunken soo small that many of the quantities are barely readable. I suggest moving some of the results to the supplementary material to allow a larger font size in the table.\"], \"references\": [\"Junuthula, R. R., Haghdan, M., Xu, K. S., & Devabhaktuni, V. K. (2019). The Block Point Process Model for continuous-time event-based dynamic networks. In Proceedings of the World Wide Web Conference (pp. 829-839).\", \"Peixoto, T. P., & Rosvall, M. (2017). Modelling sequences and temporal networks with dynamic community structures. Nature Communications, 8(1), 582. doi:10.1038/s41467-017-00148-9\"], \"questions\": \"1. How is the sampling of edges done? Just randomly selecting 50% of the edges? Some type of random walk starting at a node?\\n2. Why is relative improvement compared to CPNN a useful quantity to assess performance?\\n3. Figure 4 suggests that the diffusion model is generating networks with extremely high clustering coefficients. The clustering coefficients of the original networks are not visible in the figure so there is nothing to compare to, so how can we verify your statement that \\\"clustering coefficient distributions closely match\\nbetween the generated and original networks, reflecting consistent local clustering tendencies\\\"? Furthermore, extremely high clustering coefficients in the range of 0.8-1.0 are almost never seen in real networks. I would argue that the lower clustering coefficients generated by the AAAI model are more representative of real networks.\\n4. 
What is a good value to obtain for your relative ranking metric shown in Table 1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Dear Program Committee Members,\\n\\nI hope this message finds you well. I am writing to formally request the withdrawal of our submitted manuscript. \\n\\nOver the past 2 weeks, it has become evident that several critical experiments and analyses are still required to comprehensively validate and enhance the findings presented in our paper. Despite our best efforts during the past two weeks, we have not been able to complete these experiments to the standard we aspire to. We strongly believe that the additional work will significantly improve the quality and impact of our research, ensuring it is more robust and complete for future dissemination.\\n\\nWe deeply appreciate the opportunity to submit to ICLR and sincerely apologize for any inconvenience this withdrawal may cause. Thank you for your understanding and for the time and effort the reviewers and the committee have invested in our work.\\n\\nWe look forward to resubmitting a thoroughly revised and improved version in the future.\\n\\nThank you for your kind consideration.\\n\\nBest regards, \\nEn XU\"}",
"{\"summary\": \"The paper proposes a new training framework for a model that predicts the appearance between two edges. The model itself corresponds to a vanilla neural network of 3 layers. The input layer receives the embeddings of two edges, the middle layer uses ReLU as the activation function, and the output is just two neurons with a scalar which are converted to a probability using a softmax function. Originally, one network is used to train the model. From this network, 100 timestamps are generated and used in the training process. The paper proposes the use of a diffusion model to replicate the source network and be considered in the process (instead/in conjunction with the source network). The results are promising as they show that the model is also able to replicate the generation process of a network.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The introduction of the paper shows a very important problem, try to infer the timestamps of the edges to learn the generate mechanism of a network. The preliminaries are well-explained and have all the details to understand the basic part of the paper. Subsection 3.1 defines the main model (the vanilla neural network), explaining the technical details based on the dimensions of the embedding. Unfortunately, the other subsections and sections must be improved in terms of clarity.\\n\\nThe explanation of the baselines in the appendix is a very good approach to save space, congrats.\\n\\nThe significance of the paper could be very high if all the processes are correctly applied. However, because of the lack of information, it is impossible to assess a more detailed evaluation of the strengths of the paper.\", \"weaknesses\": \"The originality of the paper is low. The use of a diffusion model to enhance a training process has been applied to several other problems. 
Maybe, the original part is given by the application of the diffusion model to infer the timestamp of edges. Unfortunately, this is not the real problem.\\n\\nThe clarity of the paper must be largely improved. While the first section allows the understanding of several details, other details are omitted, making the paper difficult to understand (for example, the problem itself). The conclusion mentions \\\"Our work focuses on predicting edge generation times from given network structures, which aids in understanding network evolution and forecasting future developmental trends\\\"; however, the proposed model (vanilla neural network) determines the probability of an edge over another, not the edge generation time. This implies, that the real problem is the selection of an edge over another given the current state of the network. This lack of information is also observed in the training and evaluation process, making the paper impossible to replicate. \\n\\nThe paper is not self-contained, the experiment section is barely described, and most of the work is given to the reader through the phrase \\\"In this experiment, conducted under the CPNN framework, we generated augmented temporal networks using both the TIGGER method and our diffusion-based approach\\\". Unfortunately, this is not enough to understand the training process and the evaluation process. I understand the generation of 100 timestamps from a single network, and that they are used for the training process. However, the generation data for the training process is not explained in detail. Do you generate the output based on two different timestamp networks? Do you generate |E_i|-|E_{I-1}| data points for the training process per timestamp? Similarly, the generation network process applied in the experiments section is not explained. Do you start with an empty network, or with 50% of the edges (like the sampling strategy described in subsection 3.3)? Given a network, how do you generate the next edge? 
Do you try all possible edges and compare among all of them, or do you use another strategy? \\n\\nThe paper mentions \\\"improves the prediction accuracy for edge generation order across various networks\\\". However, the experiment is not explained. Do you use an entire network and take two different edges? Are they consecutive edges or one of them is in the network and the other one is not? Do you use a similar approach to the timestamp generation and consider only the edges that were generated?\\n\\nSection 3.4 explains the graph diffusion model. Unfortunately, this section seems a very general description of the difussion model instead of a description focused on the main contribution of this work. The main relation with the proposal is the last part of the subsection \\\"First, a pure Gaussian noise is sampled and then the denoising networks iteratively predict the noise to be removed, and the ordered edges will be obtained from the weights of the sampled network gradually\\\".\\n\\nAs can be observed the replicability of the paper is very low, and all these details reduce the quality and clarity of the paper. \\n\\nFigure 2 must be changed. The final output says \\\"Edge time prediction\\\", but it determines the output between two possible edges. This is repeated multiple times throughout the paper. For example, it says \\\"generation time prediction accuracy\\\". However, the final model receives the embeddings of two edges, and, after the application of a softmax function, determines the probability of which edge should be added to the network. Also, why do you use two neurons with linear functions instead of using two neurons with softmax?\\n\\nThe details of the networks used in the experiments are not included in the paper (other than the number of nodes of the synthetic networks). \\n\\nThe paper uses a trainingNet and a TargetNet. This is barely described in the experiment section. Something is mentioned in subsection \\\"3.2\\\".1. 
\\n\\nAccording to the results, the number of augmented networks does not increase the performance (subsection 4.4). Did you try the model without any augmented network (or this is equivalent to a baseline)? \\n\\nFinally, the results from subsection 4.6 are alarming. According to the paper, the model can generate almost the same network. This implies a clear overfitting of the process. Unfortunately, this can not be evaluated because of the lack of details mentioned previously.\", \"questions\": \"Please see the multiple questions mentioned in the previous section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"The authors propose a framework to enhance the reconstruction of evolutionary history in complex networks, focusing on accurately predicting edge generation times. The authors introduce a novel approach using diffusion models, specifically a model termed TopoEvoDiff, which generates temporal networks to augment scarce datasets. Their method aims to mitigate issues such as overfitting and limited generalization, which are common in network evolution studies due to the scarcity of temporally labeled data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper effectively employs diffusion models for data augmentation, addressing a significant challenge in network evolution reconstruction\\u2014namely, the limited availability of temporally labeled network data.\", \"The proposed model demonstrated a notable improvement in prediction accuracy highlighting its potential efficacy across diverse types of complex networks, from biological networks to social and collaboration networks.\", \"The framework eliminates the need for retraining on specific network types, suggesting substantial efficiency in both time and resource use, as evidenced by its ability to generalize across different network types and sizes.\"], \"weaknesses\": [\"Limited real world experiments, Including more varied, real-world datasets could demonstrate the model\\u2019s adaptability to different domains, enhancing the paper\\u2019s appeal and impact. 
Specifically, the paper uses only 4 real world network making the experimental set-up not so convincing.\", \"Lack of comparison with other generative models, a broader comparison with other data augmentation or graph generation techniques, such as GANs, could offer insights into the unique advantages of diffusion models over alternative methods in this context.\"], \"questions\": \"Could the authors provide us with more real world experiments of more diverse and additional networks, such as a bigger collection of social networks, PPI networks, rather than just 4 datasets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
]
} |
|
E2CR6hmV1I | Enhancing Multi-Agent Learning in Real-World Interactive Environments through Process Reward Decomposition | [
"Zhitao He",
"Zijun Liu",
"Peng Li",
"Yi Fung",
"Ming Yan",
"Ji Zhang",
"Fei Huang",
"Yang Liu"
] | LLM-based agents have made significant advancements in interactive environments, such as mobile operations and web browsing, with multi-agent systems further boosting performance. However, current agent learning techniques heavily rely on in-domain data and struggle to generalize across tasks and environments. Moreover, existing multi-agent learning methods are limited by fixed role assignments, which restrict their flexibility and generalization. Furthermore, the multi-step nature of interactive tasks, combined with sparse end-to-end reward signals, hinder effective learning to a great extent. To address these issues, we propose $\textit{CollabUIAgents}$, a two-stage multi-agent learning framework for interactive environments. In the first stage, the base model is adapted to the environment using curriculum learning on multi-level instruction data. In the second stage, a novel process reward decomposition strategy is introduced during reinforcement learning, allowing rewards to be distributed at both the agent and conversation round levels. This granular feedback fosters collaborative awareness among agents without predefined roles and improves learning efficacy. Experimental results show that our method significantly enhances the performance of multi-agent systems based on open-source models, achieving notable improvements both within and across domains, while also exhibiting strong cross-environment generalization capabilities. Moreover, our best-performing systems achieve results on par with or exceed those of the strong closed-source models, while maintaining the flexibility to be integrated with prompt-based multi-agent systems for future research. | [
"Language model",
"Muti-agent learning"
] | Reject | https://openreview.net/pdf?id=E2CR6hmV1I | https://openreview.net/forum?id=E2CR6hmV1I | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"s9OBGqvyl1",
"LDPLqaF965",
"Go2aC8EBjj",
"Fpws4SLOQH",
"0iD9wKruZx",
"0PkJgENNhh"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review",
"official_review",
"meta_review"
],
"note_created": [
1730552808399,
1730459773092,
1737523875527,
1730708692384,
1730047103748,
1734602224929
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission7928/Reviewer_G4vo"
],
[
"ICLR.cc/2025/Conference/Submission7928/Reviewer_Uinp"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission7928/Reviewer_gWHo"
],
[
"ICLR.cc/2025/Conference/Submission7928/Reviewer_hJ5t"
],
[
"ICLR.cc/2025/Conference/Submission7928/Area_Chair_zUGs"
]
],
"structured_content_str": [
"{\"summary\": \"This paper investigated how to improve the performance of LLMs structured in multi-agent systems in interactive tasks. To this end, it proposed a two-stage strategy: (1) Synthesizing data automatically and training the LLM agents with curriculum learning based on the data; (2) generating reward decomposition across both decision making steps and negotiation step among agents at each decision making step. The proposed approach has been shown to enjoy substantial performance improvement against open-source models and a comparable performance to closed-source models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. It is difficult to evaluate the originality of this paper, as it mainly combines multiple existing techniques together to form a strategy for a specific application on LLMs. From the perspective of software engineering and empirical study, this paper lies in the category of original work, but the novelty is limited. It only addressed some evident weaknesses with unsurprising empirical strategies. However, some hypotheses seems insightful, such as the hypothesis to explain the randomly generated edges on communicaton graphs.\\n2. As an empirical study paper, it focused on proposing some hypothesis and methodology to address a problem, which have been well validated by the experimental results with ablation study to emphasize the importance of each module proposed. For this reason, the overall quality of this paper is good.\\n3. The motivation of this paper has been well clarified, and the methodology description and experimental setups have been well stated. The experimental analysis is comprehensive and reasonable to justify the importance of the proposed approach.\\n4. The significance of this paper is not easy to evaluate. 
Standing from the view of LLM performance improvement, the strategies revealed can make some contribution to the research field (but still waiting for reproducing the results). From the perspective of methodology, I cannot see any novel ideas. Overall, I believe this paper may only attract attention in the domain of LLMs, but I seriously suspect if the result of this paper can give a long-standing impact to the mainstream of ML.\", \"weaknesses\": \"1. In the majority voting described in equation (5), how do you process the situation where two or more actions are tied to have the same counts? If the strategy is random selection, could you please show me if this kind of strategy will incur some fluctuation in performance?\\n2. In line 171-172, the sentence that \\\"The proper size of local memory enhances the diversity of decision making and avoids introducing too long contexts,\\\" is not easy to comprehend. I cannot have a clear picture to link the local memory size with the diversity of decision makings. Could you give more explanation about it?\\n3. I scanned the paper, and have not noticed any discussion on the faithfulness of automatically generated data to the realistic situations. Without this guarantee, I cannot foresee the benefit of automatic generation of data for training LLMs, as the resulting harm to the society could be more severe than the improvement measured in quantitative metrics and paid human labours. How do you guarantee that the generated data is faithful to realistic situations, with no hallucinations?\\n4. During processing preference knowledge, you proposed to use SFT followed by DPO, but with no reason to explain this strategy. Could you please give more insights into this strategy?\", \"questions\": \"Please address the concerns in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper addresses the deficiencies in flexibility and generalization in existing multi-agent LLMs and proposes the CollabUIAgents framework based on general environment knowledge learning and MARL in two stages. This enhances performance in Open-Source LLMs. However, the paper still needs improvement in specific expression, motivation, and writing style, as detailed below.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper investigates an important direction in multiagent systems and poses a promising direction for future research.\", \"weaknesses\": \"1. In Task Formulation, it is necessary to clarify the concepts of `agent` and `policy`. Furthermore, there needs to be consistency between a_t in Eq(3) and Eq(1). It is unclear whether a_t represents an action by a single agent, a joint action by multiagent, or an aggregation action, which should be specified.\\n\\n2. In stage 1, why is the curriculum divided into three parts? or what is the insight? It's better to explain the rationale behind choosing these three particular parts for the curriculum, and how each part contributes to the overall learning process.\\n\\n3. How to comprehend ''The rationale is that, for the critic agent, it might be more simple to identify whether a single decision is wrong, than to judge the reward of long decision chains between multiple agents''? It seems to contradict RL research. Instead, single-step rewards are usually not as accurate as long-term rewards, From the internal logic, judging a single-step decision is more difficult than multi-step. It's better to provide evidence or reasoning to support their claim, especially in light of existing RL literature that suggests otherwise.\\n\\n4. Multi-agent reinforcement learning methods are usually based on mathematical models such as Markov Game and Dec-POMDP. In stage 2, what is the multi-agent model, which needs a detailed explanation? 
How does it relate to or differ from standard models like Markov Games or Dec-POMDPs?\\n\\n5. The nature of their reward decomposition process in stage 2 should be explained clearly. Is it artificial prior or adaptive learning? If it is artificial prior, it is no different from role allocation in form, and it is difficult to reflect the advantage of adaptive learning. How does it differ from or improve upon traditional role allocation methods?\\n\\n6. In stage 2, it seems that each agent learns independently. How to ensure multi-agent collaboration? What are the mechanisms or techniques to promote collaboration between agents during the learning process in stage 2?\\n\\n7. The paper also has several unclear character definitions and grammatical problems, such as\\uff1a\\n\\n (1) It should be \\\"each agent \\\\pi_i\\u201d instead of \\\"all agents \\\\pi_i\\u201d in line 271\\n\\n (2) It should be \\\"the number of agents\\u201d instead of all \\\"the number agents\\\" in line 132\\n\\n (3) In Task Formulation, transition function and the maximum step are both represented by the character `T\\u2019.\\n\\n The reviewer suggests that the authors conduct a thorough proofreading of the paper, paying particular attention to consistency in mathematical notation and grammatical correctness.\", \"questions\": \"Please see the cons part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"summary\": \"This paper presents the CollabUIAgents framework, which aims to improve multi-agent learning capabilities in real-world interactive environments, particularly addressing issues related to sparse rewards. By employing a two-stage learning process\\u2014general environmental knowledge adaptation and multi-agent reinforcement learning with process reward decomposition\\u2014the framework enhances adaptability and cross-task generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The method effectively tackles the challenges of sparse rewards and rigid role assignments in multi-agent learning. The two-stage learning process not only enhances adaptability across various tasks and environments but also allows for automated data generation, reducing the need for manual annotation. Furthermore, the comprehensive experimental design, including extensive comparisons with state-of-the-art models and thorough ablation studies, provides strong empirical support for the framework's effectiveness. The results indicate impressive performance improvements, demonstrating the framework's potential to advance multi-agent systems in real-world interactive scenarios.\", \"weaknesses\": \"The experimental setup in this paper lacks clarity, particularly regarding the configuration and evaluation of different agent setups, which impacts the transparency and reproducibility of the experiments. Additionally, the framework is demonstrated with a relatively small number of agents, raising scalability concerns. As the number of agents grows, managing an undirected communication graph may become computationally expensive, potentially affecting performance. Furthermore, some figures could benefit from clearer annotations and layout to better convey the framework\\u2019s structure and processes.\", \"questions\": \"1. **Scalability with Increased Agents**: The framework appears to use a relatively small number of agents. 
How does the method scale with larger numbers of agents, and what strategies could be implemented to address potential computational overhead in managing the communication graph?\\n2. **Edge Update Strategy in Graph Structure**: Could you clarify the role and frequency of edge updates within the communication graph? How does this edge updating process impact the overall performance, and would a static graph structure be a feasible alternative for certain applications?\\n3. **Experimental Parameter Choices**: Can you provide more context on the choices made for experimental parameters, such as the number of conversation rounds and agent configurations? How were these values determined, and could they impact the generalization of the results?\\n4. **Cross-Environment Transfer Learning**: While the framework shows promising results in cross-environment tasks, what specific techniques within the MARL setup contribute most to this adaptability? For instance, does the curriculum learning or specific reward design play a key role in enabling transfer learning?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"summary\": \"This paper introduces a two-stage multi-agent learning framework to address the poor generalization and sparse reward issues in large language model (LLM)-based agent systems. In the first stage, the authors propose a data synthesis pipeline to automatically generate training data for fine-tuning the LLM. In the second stage, they leverage a critic agent to allocate rewards to each agent at every conversation round.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper studies an important problem: the generalization of cross-platform.\\n2. The idea of assessing the contributions of each agent is interesting.\", \"weaknesses\": \"1. The organization of the paper needs some improvement, especially MARL with Edge Updates Section.\\n2. The main concern is that the proposed Process Reward Decomposition is not well-justified: I am skeptical about using a critic agent to directly generate temporal and structural credit assignments.\\n3. Lack of comparison with related work on multi-agent LLM[a].\\n\\n\\na. Zhuge, Mingchen, et al. \\\"Language agents as optimizable graphs.\\\" arXiv preprint arXiv:2402.16823 (2024).\", \"questions\": \"1. Equation 9 needs more explanation.\\n2. Not strongly connected to MARL.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"metareview\": \"The author did not respond during the rebuttal period, and the paper\\u2019s score was significantly below the acceptance threshold. As a result, the paper was decided to be rejected.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}"
]
} |
E1m5yGMOiV | KinPFN: Bayesian Approximation of RNA Folding Kinetics using Prior-Data Fitted Networks | [
"Dominik Scheuer",
"Frederic Runge",
"Jörg K.H. Franke",
"Michael T. Wolfinger",
"Christoph Flamm",
"Frank Hutter"
] | RNA is a dynamic biomolecule crucial for cellular regulation, with its function largely determined by its folding into complex structures, while misfolding can lead to multifaceted biological sequelae. During the folding process, RNA traverses through a series of intermediate structural states, with each transition occurring at variable rates that collectively influence the time required to reach the functional form. Understanding these folding kinetics is vital for predicting RNA behavior and optimizing applications in synthetic biology and drug discovery. While in silico kinetic RNA folding simulators are often computationally intensive and time-consuming, accurate approximations of the folding times can already be very informative to assess the efficiency of the folding process. In this work, we present KinPFN, a novel approach that leverages prior-data fitted networks to directly model the posterior predictive distribution of RNA folding times. By training on synthetic data representing arbitrary prior folding times, KinPFN efficiently approximates the cumulative distribution function of RNA folding times in a single forward pass, given only a few initial folding time examples. Our method offers a modular extension to existing RNA kinetics algorithms, promising significant computational speed-ups orders of magnitude faster, while achieving comparable results. We showcase the effectiveness of KinPFN through extensive evaluations and real-world case studies, demonstrating its potential for RNA folding kinetics analysis, its practical relevance, and generalization to other biological data. | [
"RNA Folding Kinetics",
"Prior-Data Fitted Networks",
"Deep Learning",
"Synthetic Data",
"Transformer",
"Bayesian Inference"
] | Accept (Poster) | https://openreview.net/pdf?id=E1m5yGMOiV | https://openreview.net/forum?id=E1m5yGMOiV | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"xGzO0x7qNS",
"wj0C0npEB3",
"ssSAijUKqW",
"rcxZ2Nqyze",
"pg8vzLEbT7",
"mREhPiHdpU",
"jkQFaZz6ER",
"jkLkqkbszB",
"g9gkS06r2I",
"e19NQOjqb7",
"cgXw4WzN46",
"aeLYYI5rk8",
"Zhnx6MGiVf",
"XDMZp6x2ob",
"VMpAXuUgxi",
"VC8Sm3B6B7",
"Ut89EYZ3u5",
"TF73tCuV31",
"QRuZKzAFp1",
"QKxA8yk8pL",
"OvQGY9hIKl",
"MjQMddukeU",
"M2aDl7rKyq",
"JkI3p4iyfs",
"J2kqgCXDCi",
"IVquwpuPG3",
"DxI8pmi9Ht",
"DhTKkyQpUN",
"AQlDZwVNoi",
"9BnNJNrxN8",
"8Zc7ZYv2z1",
"7IIsnacmRM",
"6pwOTE6BbY",
"5mUVvPBnke",
"5hTEsrRIH8",
"5eVmX3nLA3",
"49FMwc8QWj",
"3YFkwS2Lro"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"meta_review",
"official_comment",
"official_comment"
],
"note_created": [
1733083197827,
1732996793523,
1732997693386,
1732649905872,
1732198082647,
1732197805973,
1732778187656,
1732197583979,
1732894263544,
1732382293434,
1730695896948,
1732196888270,
1732799268114,
1732650062337,
1732197332453,
1732805405993,
1732206117904,
1732201551514,
1732897419490,
1732653492525,
1730645166792,
1732197144669,
1732894471660,
1732198287417,
1732533201153,
1730733147872,
1732805465424,
1732894849638,
1732805331631,
1732894785076,
1730689754498,
1732894696977,
1731704373301,
1737524033241,
1732577645287,
1734817759787,
1732198249787,
1732808981994
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_PRsD"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_DNAm"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_TUhM"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_DNAm"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
],
[
"ICLR.cc/2025/Conference/Submission10212/Area_Chair_Tis7"
],
[
"ICLR.cc/2025/Conference/Submission10212/Authors"
],
[
"ICLR.cc/2025/Conference/Submission10212/Reviewer_FZT1"
]
],
"structured_content_str": [
"{\"title\": \"Clarifications from senior author\", \"comment\": [\"Dear Reviewer FZT1,\", \"Senior author here. Thank you for your engagement! However, I believe we failed to clarify some fundamental points with you:\", \"Both MCMC and PFNs approximate the exact Bayesian posterior. They both do full Bayesian inference. It\\u2019s just that PFNs are dramatically faster and often yield a much better approximation than MCMC with reasonable time limits.\", \"Two previous papers have directly compared PFNs and MCMC and shown PFNs to be 10000x faster, or much better at the same computational cost: 1. https://arxiv.org/abs/2112.10510 (the paper introducing PFNs; see Figure 5 for 10000x faster convergence to the same performance) and 2. https://proceedings.neurips.cc/paper_files/paper/2023/file/3f1a5e8bfcc3005724d246abe454c1e5-Paper-Conference.pdf (the LC-PFN paper we mentioned earlier in this rebuttal; see Figure 10 for convergence to a solution in 0.1s that MCMC does not reach in 1000s but would eventually reach with optimal hyperparameters and enough time).\", \"While MCMC has a rich theory and history, it is also conceptually complex (our problem with a varying number of mixture components could, e.g., not be addressed by standard MCMC, but would require extensions like reversible jump MCMC), is nontrivial to get right (choosing various hyperparameters, proposal distributions, burn-in, etc) and computationally very costly.\", \"For the full background on PFNs, see https://arxiv.org/abs/2112.10510. Insight 1, Corollary 1.1 and Corollary 1.2 there show theoretically that by optimizing cross entropy loss PFNs directly minimize approximation error of the posterior. Perfectly optimizing that cross entropy loss (hypothetically: with infinite data, infinite compute and the right optimizer) leads to the approximation of the posterior being exact. 
Figure 3 (right) in the same paper demonstrates very truthful approximations empirically with finite data and time, and actual optimizers (on a Gaussian process, where the exact posterior is available in closed form).\", \"So far, no MCMC solution exists for the problem of predicting RNA folding time distributions we\\u2019re tackling. We agree that it could also be sensible to use MCMC, but we chose PFNs since it\\u2019s a much better fit for the problem and much easier to do (no need for reversible jump MCMC, etc). We strongly believe that we should not be penalized for not comparing to a method that hasn\\u2019t been used for this problem before.\", \"For completeness and the avoidance of doubt, we note that PFNs also have key disadvantages compared to MCMC, in particular not giving access to the samples from the latent. But for cases where these are not needed (like the current) they are often the best choice.\", \"We hope to have clarified these points and would be glad to address any follow-up questions.\"]}",
"{\"title\": \"Author response\", \"comment\": \"Dear Reviewer FZT1,\\n\\nWe apologize that our previous conversation led to confusion.\\n\\nWe train KinPFN on roughly 5 million GMMs. For each of these GMMs we know the exact parametrization and these GMMs would exactly approximate the prior distributions. However, at test time, when fitting a GMM on the provided context, this GMM doesn\\u2019t have to be optimal. There are failure cases when fitting GMMs, for example when the distribution of interest contains overlapping modes. This becomes worse when using an ensemble of different GMMs, since the ensemble comes with its own limitations. \\n\\nIt is likely that the training examples contain cases where fitting a GMM would lead to suboptimal results (e.g. in the case of poorly separated modes). The PFN might thus handle these cases better due to the massive amounts of different GMMs it was trained on and the benefits of the learned representations.\\n\\nTherefore, we disagree that KinPFN has to behave exactly like a GMM fitted on the context but can likely perform better as indicated by our results.\\n\\nWith kind regards,\\n\\nThe Authors\"}",
"{\"title\": \"Comment\", \"comment\": \"KinPFN is indeed trained on 5 million GMM, which each have their parameters drawn from a prior. This describes a prior over GMMs and training the PFN to minimize this likelihood is guaranteed to make it approximate the full Bayesian model. When given new data, a Bayesian GMM will infer all the parameters, such as the means and stds and. number of components using Bayes rule; this can be approximated by MCMC quite easily (ex https://arxiv.org/abs/1502.06241).\\n\\nThe Bayes-optimal model, the one that maximizes likelihood on the data you are training KinPFN on, is this Bayesian model. Therefore, when you train KinPFN, it approaches this model and in particular should behave just like it. If KinPFN outperforms this model then it must be that it doesn't optimize its training objective.\\n\\nI appreciate the author's extended discussion but I don't think there is any use in continuing this discussion.\"}",
"{\"title\": \"Revised Version of the manuscript\", \"comment\": \"We updated our manuscript to fit the page limit again. We further updated the draft of the overview Figure 1 with a full version of the figure. Due to the new figure, and additional text in response to the reviewers\\u2019 questions, we had to move the figure showing examples of the prior into the appendix.\\n\\nIf the reviewers have any other suggestions for structuring our manuscript, we are happy to hear them.\\n\\nWith kind regards,\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer DNAm continued\", \"comment\": \">Compared to the dynamic changes in RNA secondary or tertiary structures, the folding ratio provides very coarse-grained information about RNA folding dynamics, which seems still far from practical applications. The paper needs to further elucidate how this study can contribute to solving RNA biology problems.\\n\\nWe agree with the reviewer that RNA folding kinetics and dynamics can be captured at different levels of granularity. However, we disagree with the reviewer that first passage times are far from practical applications. In our initial submission, we already show an interesting application, assessing the folding efficiency of different RNA sequences that fold into a common minimum free energy structure (Section 5.3). This kind of analysis is particularly useful for RNA drug discovery, where the rates of obtaining the functional molecular folds could be essential to rank different candidates. In addition, the first passage times of systems play a substantial role in biology, chemistry, and medicine (see e.g. [1]). While KinPFN was primarily developed to study RNA folding kinetics, our results for gene expression data suggest that KinPFN might generalize to other data sources of FPTs as well, with the potential to impact different areas of biology besides RNA folding kinetics.\\n\\nIn Addition, many published examples show the direct application of kinetic folding simulations to biological data, e.g. the original Kinfold paper (doi: 10.1017/s1355838200992161) or studies that investigate riboswitch folding (doi:10.1021/jacs.6b10429)\\n\\nThat said, we add a sentence to the Introduction of the revised version of our manuscript to clearly highlight fields of applications of KinPFN.\\n\\n[1] Polizzi, N. F., Therien, M. J., & Beratan, D. N. (2016). Mean first\\u2010passage times in biology. 
Israel journal of chemistry, 56(9-10), 816-824.\\n\\n>In the experimental section, the results from Kinfold are used for validation, but the inherent error of Kinfold needs rigorous demonstration, which diminishes the persuasiveness of the results. Is it possible to use collected or published wet lab data for the evaluation of this problem?\\n\\nWe agree with the reviewer that we use simulation data obtained from Kinfold as ground truth. However, we also analyze the performance of KinPFN for simulation data from another simulator (Kfold, see experiment 5.1), showing no decrease in the performance of KinPFN. Since KinPFN was not trained using simulation data but a synthetic prior, the performance of KinPFN is independent of the underlying data-generating process. That said, we would expect that KinPFN is capable of predicting first-passage time distributions for wet-lab kinetics data as well. This is also supported by our experiments for gene expression data which arguably requires more transfer capabilities than experimentally obtained first passage times compared to simulation data. However, we are not aware of any resource for obtaining RNA folding kinetics wet-lab data.\\n\\n>tRNA and rRNA are the most common and numerous types of RNA. It would be better to test a broader and more diverse range of RNA types.\\n\\nWhile we agree with the reviewer that tRNA and rRNA are common and well studied RNAs, this was exactly our motivation to use these two types of well-known, structured non-coding RNAs as a reference for KinPFN\\u2019s behavior on eukaryotic RNAs. However, in response to the reviewer\\u2019s concerns, we are currently running simulations for the following three RNAs:\\n\\n- https://rnacentral.org/rna/URS00002F3927/224308\\n- https://rnacentral.org/rna/URS0000BA5588/9606\\n- https://rnacentral.org/rna/URS0000759FB2/9606\\n\\nWe hope that we can obtain the required number of simulations within the next few days. 
\\n\\nWould an additional evaluation on these samples resolve the reviewers concerns regarding evaluations for different RNA types? \\n\\n>There is a lack of research and discussion on deep learning methods suitable for the data in this problem. The paper only presents the prior-data fitted network for deep learning-based probability density estimation.\\n\\nWe thank the reviewer for the useful comment. We add a section on suitable deep learning methods to the related work sections in the main paper and the appendix and further briefly discuss the usage of AI-based methods for MD simulations.\\n\\nWe thank the reviewer again for the valuable feedback and helpful comments. We hope that our response clarified the questions and solved the reviewers\\u2019 concerns. If there are any further questions or clarifications required, we are happy to answer those. Otherwise, we would appreciate it if the reviewer could increase our score.\\n\\nWith kind regards,\\n\\nThe authors\"}",
"{\"title\": \"Response to Reviewer DNAm\", \"comment\": \"Dear Reviewer DNAm,\\n\\nThank you for your valuable feedback and for highlighting the novelty and reproducibility of our approach. In the following, we address your concerns and questions in detail.\\n\\n>The current writing does not facilitate quick comprehension of the research problem for readers from diverse backgrounds. It would be beneficial if the author could include a figure illustrating specific data and formalization when introducing the problem. For instance, depicting the relationship between the RNA folding process and the corresponding change in folding fraction could enhance clarity.\\n\\nWe thank the reviewer for this helpful comment. We are happy to include an overview figure to increase the clarity of our proposed approach. We add a draft for the figure in the Introduction of our revised manuscript. \\n\\nWould a full version of this figure solve the reviewers\\u2019 concerns regarding quick comprehension of the research problem?\\n\\n>The unique challenges RNA folding kinetics pose are not adequately summarized in the introduction. Additionally, the paper directly employs prior-data fitted networks to model the CDF without additional enhancements. Highlighting the improvements made to address the specific issues in this field would enhance the paper's contribution.\\n\\nWe thank the reviewer for the useful comment. We update the Introduction to clarify our contributions more and to avoid confusion. However, we disagree with the reviewer that we directly employ PFNs to model the CDF without additional enhancements. In contrast to previous work that use PFNs e.g. to extrapolate learning curves, we do not predict the PPD of a target y for a given quantile x conditional on a dataset D, but learn the entire PPD of y (in this case, the first passage times) without knowledge about the quantiles (x is always a zero-vector), effectively representing the absence of further information. 
\\n\\nThis approach is motivated by the underlying problem structure, where we do not have access to the true quantiles of the context first-passage times. Therefore, we cannot treat the task as a standard regression problem but directly learn the PPD of first passage times conditional on a (data)set of context first passage times but without requiring quantile information, which is novel in the field of PFNs.\\n\\nWe add a small part at the end of the Background section of the revised manuscript to point out this novelty more clearly.\\n\\n>It would be clearer to explicitly state in the introduction or background section whether the paper focuses on RNA's tertiary or secondary structure, and how the folding ratio is calculated.\\n\\nWe include the clarification that the first passage times discussed in the paper are consistently derived from folding simulations that specifically focus on secondary structure formation in the Background section of our revised manuscript. \\n\\nHowever, we would like to note that the underlying structural information, be it tertiary or secondary structure based, is more related to the simulators used. As a pure in-context learning approach, KinPFN is well-suited to generalize across different simulators and we would expect that the usage of a tertiary-structure-based simulator for the generation of context FPTs would only marginally impact the FPT approximations of KinPFN.\\n\\nTo clarify the calculation of the folding fraction: the folding fraction is represented by the cumulative distribution function (CDF) derived from a given set of FPTs. To compute the CDF using an available dataset of FPTs\\u2014referred to in the paper as the ground truth CDF over a statistically reasonable number of 1000 available FPTs\\u2014the process is as follows: The RNA folding times are first arranged in ascending order. 
For each time point in this sorted list, the fraction of folding events completed by that time is determined by dividing the number of folding times less than or equal to that time by the total number of events. This yields cumulative proportions at each time point, providing a stepwise function that describes the progression of completed folding events over time.\"}",
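The stepwise folding-fraction construction the authors describe is exactly an empirical CDF. As a minimal illustration (not the authors' code; the example FPT values below are hypothetical):

```python
import numpy as np

def empirical_cdf(fpts):
    """Folding fraction (empirical CDF) from a set of first passage times.

    Sort the FPTs in ascending order; at each time point, the folding
    fraction is the share of folding events completed by that time.
    """
    times = np.sort(np.asarray(fpts, dtype=float))
    fractions = np.arange(1, len(times) + 1) / len(times)
    return times, fractions

# Hypothetical first passage times (arbitrary simulation time units)
times, frac = empirical_cdf([3.0, 1.0, 2.0, 4.0])
print(times.tolist())  # [1.0, 2.0, 3.0, 4.0]
print(frac.tolist())   # [0.25, 0.5, 0.75, 1.0]
```

Plotting `frac` against `times` yields the stepwise function describing the progression of completed folding events over time.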
"{\"title\": \"Revised version of the manuscript\", \"comment\": \"We are happy that we finally obtained simulation data for two more RNA types, a SAM Riboswitch and a microRNA, as requested by reviewer DNAm. We updated our manuscript accordingly, adding approximations for these RNAs to the appendix.\\n\\nWith kind regards\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer PRsD\", \"comment\": \"Dear Reviewer PRsD,\\n\\nThank you for your valuable feedback. We address your questions and concerns in the following.\\n\\n>I would have liked to have seen Table 6 comparing all the different methods in the main paper. Plus, I would have liked to have seen comparison on MAE across methods, including in Table 1\\n\\nWe thank the reviewer for this useful comment and moved all results from Table 6 to the main paper. Additionally, we add MAE and Kolmogorov-Smirnov (KS) statistic results for the respective experiments in Appendix H.2 due to space limitations in the main body.\\n\\n>Reliance on multimodal Gaussians\\n\\nWe agree with the reviewer that different distributions might further improve KinPFN's performance. We think that we adequately address this limitation in the discussion on future work in Section 6. However, as the first deep learning approach for RNA folding kinetics, our results indicate that assuming a Gaussian distribution for first passage times is reasonable and already leads to strong performance. Nevertheless, we plan to explore the potential of alternative distributional assumptions in future work.\\n\\n>Reliance on only two metrics to measure accuracy of the metric.\\n\\nThe negative log-likelihood is commonly used to assess the performance of PFNs across different tasks, and we think that it captures the performance of KinPFN arguably well. However, we are happy to include additional metrics that offer new insights or increase the understanding of strengths and weaknesses of our approach. For the specific case, we added MAE and KS to the analysis of the performance of KinPFN (results shown in Appendix H.2).\\n\\n>Why not show the comparison between the sequence length and FPT in Table 1? 
I think that might provide more insight as to where KinPFN works better over other methods unless sequence length isn't an important variable, which it seems to be since it's the subject of study in Figure 3\\n\\nWe are currently analyzing the influence of sequence length along with other RNA features for all predictions and will add the requested analysis to the appendix, as we think that Table 1 might be overloaded otherwise. The number of potential structural states increases exponentially with the sequence length of the RNA. The reviewer, therefore, is right that simulations for longer RNAs require substantially longer runtimes of the simulators and are particularly challenging. As a pure in-context learner, KinPFN\\u2019s performance, however, is independent of the sequence length (similar to fitting KDEs or GMMs), which is one of the major advantages of our approach because it offers substantial speed-ups specifically for the case of long RNAs. The analysis shown in Figure 3 was performed to confirm this independence claim for KinPFN.\\n\\n>Why not put the legend in Figure 5 in the appendix? I'm not going to be able to copy/paste that to check, anyway.\\n\\nWe included the three sequences in the figure legend to clarify that this experiment compares three different RNAs, each folding into the same secondary structure but undergoing distinctly different folding processes and efficiencies. We therefore think that the sequences provided in the legend of Figure 5 are valuable for understanding the results and would like to keep the respective figure as it is.\\n\\n>Why not use the KS test between CDFs as another comparison? 
This would help capture maximum discrepancy and add nuance to the experimental analysis.\\n\\nWe thank the reviewer for this helpful suggestion and we add the KS statistic results to Appendix H.2 in the revised version of our manuscript.\\n\\nWe hope that we addressed all your questions and would like to thank the reviewer again for the valuable feedback. We are happy to answer further questions. If all your concerns have been addressed in the response, we would like to kindly ask you to increase our score.\\n\\nWith kind regards,\\n\\nThe authors.\"}",
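For readers unfamiliar with the Kolmogorov-Smirnov statistic discussed in this exchange: it is the maximum absolute gap between two empirical CDFs. A self-contained sketch (illustrative only; the evaluation code used for the paper may differ):

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    a = np.sort(np.asarray(sample_a, dtype=float))
    b = np.sort(np.asarray(sample_b, dtype=float))
    grid = np.concatenate([a, b])  # evaluate both CDFs at every observed point
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

print(ks_statistic([1, 2, 3], [1, 2, 3]))      # 0.0 (identical samples)
print(ks_statistic([1.0, 2.0], [10.0, 20.0]))  # 1.0 (disjoint samples)
```

For production use, `scipy.stats.ks_2samp` computes the same statistic and additionally returns a p-value.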
"{\"title\": \"Author clarifications\", \"comment\": \"Dear Reviewer FZT1,\\n\\nWe acknowledge the fruitful discussion and thank you again for the response.\\n\\nAgain, please see below for our detailed responses.\\n\\n>Previously you showed evidence that KinPFN outperforms GMMs with fixed k. As I understood, you affirmed my hypothesis that \\nKinPFN outperforms GMMs because it does full Bayesian inference over the number of components. \\n\\nWe agree that \\u2018standard\\u2019 GMMs might perform badly on the data due to their limitation to a fixed number of components.\"}",
"{\"title\": \"Comment\", \"comment\": \"Ok, it seems using synthetic data is not a good idea.\\n\\nI'm a little confused by the response to the second question. I'm not suggesting running MCMC on the RNA structure, just on the fit of the multimodal Gaussian data. Fitting these types of models is extremely standard and there is a very large amount of work on them. If they can be applied to the data then it seems they should be cited and compared to PFNs. Could the authors clarify this?\\n\\nFor the last question, I would expect Gaussian mixture models to handle large amounts of data just fine.\"}",
"{\"summary\": \"The authors propose a novel method to approximate RNA first passage times (FPTs) using Prior-data fitted networks (PFNs), which are transformers that return a posterior-predictive distribution subject to simulated draws from a prior dataset. In this case, the prior dataset is created from a synthetic prior. The authors put a great deal of effort into hand-crafting this prior (e.g. using biological prior knowledge) that will then be used to train their PFN. The result is a prior that is able to train a PFN that predicts FPT better than competing methods, such as KDE, GMM, and DP-GMMs. The paper's biological motivation is sound and the achievement would help researchers in this field. However, the paper has room for improvement to be published at ICLR instead of a computational biology journal. Why not make the prior adaptive based on the data? There is opportunity to solve this problem that wouldn't require hand-crafting a new prior, or make it more general for different datasets, which would make it an _excellent_ contribution.\\n\\nThe paper's prose is clear. I do think that some of the tables could be rearranged to give the reader more clear insights into how the method works, which I relay below.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Concrete biological problem of predicting FPTs that was well-motivated and explained.\", \"Clarity in explanation of method used.\", \"Clear advantage on benchmarks compared to other current solutions.\", \"Dramatic decrease in folding time.\", \"Demonstrated applicability to problems that also exhibit FPT characteristics.\"], \"weaknesses\": [\"I would have liked to have seen Table 6 comparing all the different methods in the main paper. 
Plus, I would have liked to have seen comparison on MAE across methods, including in Table 1.\", \"Reliance on multimodal Gaussians for the prior.\", \"Reliance on only two metrics to measure accuracy of the metric.\"], \"questions\": [\"Why not show the comparison between the sequence length and FPT in Table 1? I think that might provide more insight as to where KinPFN works better over other methods, unless sequence length isn't an important variable, which it seems to be since it's the subject of study in Figure 3.\", \"Why not put the legend in Figure 5 in the appendix? I'm not going to be able to copy/paste that to check, anyway.\", \"Why not use the KS test between CDFs as another comparison? This would help capture maximum discrepancy and add nuance to the experimental analysis.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"New revision of manuscript\", \"comment\": \"We update our manuscript to include the requested changes of the reviewers.\\n\\nAll changes in the text are highlighted in red. \\n\\nWe note that we are currently above the ten-page limit due to the additional text and figures. \\n\\nWe will ensure this limit for a potential CRC by moving some of the floats and text to the appendix if necessary.\\n\\nBest regards,\\n\\nThe authors\"}",
"{\"title\": \"Comment\", \"comment\": \"Thank you for the productive conversation so far! My sense is that the main contribution of this paper is that it suggests fitting a 3 component Gaussian mixture model to a one-dimensional summary statistic from RNA kinetics simulation. While potentially useful and interesting to RNA biologists, I don't believe this represents a significant machine learning contribution. I've adjusted my score to reflect this. On the other hand, despite the abundance of methods to fit these types of models (which also could have been used to tune hyperparameters), the strategy of this paper is to approximate this inference by pre-training an LLM, as suggested by previous works; while this strategy has its pros and cons, it doesn't rise to the level of a fundamental machine learning contribution.\"}",
"{\"title\": \"Author response\", \"comment\": \"Dear Reviewer FZT1\\n\\nThanks for the fast response. Please find the details below.\\n\\n>Thanks for the response! My statement about the synthetic data was only to say that you've convinced me that my suggestion of training on synthetic data to build a prior was not a sound idea.\\n\\nWe thank the reviewer for the clarification.\\n\\n>WRT the comparison to GMMs I appreciate the authors including tables 7-9. As I understand however, KinPFN was trained to model a GMM prior, so why does it outperform them? Is it because the number of components is variable?\\n\\nWithout having investigated this in detail, we would also speculate that the reason is the variable number of components. This is further supported by the better performance of DP-GMMs and KDEs. \\n\\n>If you built a GMM with a prior over the number of components identical to that of KinPFN then how would it perform.\\n\\nKinPFN is trained for 2-5 modes. Therefore, we evaluate GMM baselines with 2-5 components. Similarly, we set the upper bound for DP-GMMs to 2-5 for all comparisons. Our results thus indicate that the performance is worse for both DP-GMM and GMMs when using the same setup.\\n\\nWith kind regards,\\n\\nThe Authors\"}",
"{\"title\": \"Response to Reviewer TUhM continued\", \"comment\": \">Moreover, since the models currently use Kinfold simulations, it could be limited by the accuracy of the NN energy model, which has limitations on longer RNAs. Maybe KinPFN offers new perspectives to deal with this challenge?\\n\\nWe agree with the reviewer that KinPFN offers new perspectives to deal with limitations of the underlying simulators, particularly those limitations that are connected to sequence length which requires substantially longer simulations. Since KinPFN is a pure in-context learner with the simulation's first passage times as context, we cannot improve the accuracy of the NN energy model. However, we can significantly reduce the time required to get accurate approximations, completely independent of the sequence length. With our experiments across different sequence lengths (Section 5.1), we took a first approach in this direction, however, the experiment is limited to RNAs with a length of < 150nt due to the very long runtimes of the simulators.\\n\\nIn addition, we would like to emphasize that we also use Kfold simulations (see Section 5.1) and achieve similar performance. This indicates that KinPFN\\u2019s predictions are independent of the underlying data-generating simulator but only depend on the provided context. We, therefore, expect KinPFN to work similarly well across different simulators which do not necessarily have to be secondary structure-based. This, however, is currently not empirically validated and requires further evaluations and larger-scale data acquisition. \\n\\nDoes this answer the reviewer's question?\\n\\n>The performance of KDEs seem very close to KinPFN, and I wonder if the author(s) could provide more context in the main body to discuss the significance of the results (e.g., from supp mat H.2)\\n\\nWe agree with the reviewer that, while outperformed by KinPFN, the performance of KDEs appears close to KinPFN. 
Nevertheless, we think that KinPFN has several advantages over KDEs, most obviously, it is not limited to multi-modal Gaussians for the formulation of the prior. As already stated by the reviewer, KinPFN is an early pioneering approach and there is still room for improvement. However, we will likely explore further applications of KinPFN for RNA kinetics in the future, including different distributions for the prior formulation. Regarding the explicit results of KDEs, we add a more comprehensive analysis including different metrics to Appendix H.2 and update the discussion of the results in the main body.\\n\\n>It may be out of the scope of this manuscript, but I would be interested to understand better how other features such as the nucleotide composition, MFE value, number of base pairs or loops, etc. impact the results\\n\\nWe are preparing the requested analysis and will add it to the Appendix.\\n\\nHowever, we do not expect substantial changes in KinPFN\\u2019s ability to predict FPTs for sequences that have particularly low MFEs or exceptionally high GC contents. RNA structure is highly dependent on the sequence context but we do not expect any bias toward a particular shape of the FPT distributions as a result of the sequence/structure traits mentioned by the reviewer. While simple, our prior seems to cover a broad range of possible FPT distributions, and thus these traits should not have a strong influence on KinPFN\\u2019s prediction quality. Nevertheless, we are also curious to see the aforementioned results and will share them here when available.\\n\\n>How does KinPFN perform on multi-stable RNAs\\n\\nMulti-stable RNAs would typically produce CDF data with one or more plateaus, depending on the modalities of the FPT distribution. Since we train KinPFN with up to five modalities, the behavior of multi-stable RNAs is generally captured by our prior. 
Given a set of context first-passage times of a multi-stable RNA, we thus expect KinPFN to achieve similar performance as demonstrated in the paper.\\n\\nWe hope that our responses clarified all the questions of the reviewer and we are happy to answer further questions if necessary. If all your concerns were addressed, we would be very thankful if you would increase your score.\\n\\nWith kind regards,\\n\\nThe authors\"}",
"{\"title\": \"Author clarifications continued\", \"comment\": \">The alternative point you made with the Adriaensen paper -- that in some cases the approximation does better than the exact procedure -- is I'm not sure a sound strategy for building a model. For example, if you fit a more flexible model with more compute then you would expect it to model the prior better and therefore do worse.\\n\\nWe think that this mainly depends on the chosen prior. PFNs perform strongly in scenarios where the posterior can be effectively represented by the prior distribution, even when the prior is not an exact match for the data. The strength of PFNs lies in their ability to generalize across distributions during training, leveraging their flexibility to approximate the posterior well without overfitting to specific prior instances.\\n\\nWhile we acknowledge that our current multi-modal Gaussian prior may not perfectly represent the true posterior of FPTs, it serves as a reasonable approximation for evaluating PFN performance. We think that this is also substantiated by our empirical results, where KinPFN outperforms the other approaches.\\n\\nWe hope we addressed all the questions of the reviewer. We thank you again for the useful comments and questions that helped us improve our manuscript. If you have any further questions, we are happy to answer them. \\n\\nWith kind regards,\\n\\nThe Authors\"}",
"{\"title\": \"Author response to Reviewer DNAm\", \"comment\": \"We thank the reviewer for acknowledging our efforts and for increasing our score. If there are any further questions, we are happy to answer them.\\n\\nWith kind regards,\\n\\nThe authors\"}",
"{\"comment\": \"I acknowledge the efforts made to address concerns raised in my initial review. The clarifications provided have improved my understanding of the work. I have decided to revise my score to reflect the progress made.\"}",
"{\"title\": \"Comment\", \"comment\": \"I\\u2019m not sure the authors and I are on the same page. I\\u2019m claiming: KinPFN is trained to approximate a GMM prior. Therefore, if KinPFN performs this approximation well, then it will behave exactly as the GMM prior when simulated with MCMC. I\\u2019m unsure if you\\u2019re suggesting that somehow the model will generalize in a different way when the prior is misspecified because of the properties of ICL, but I don\\u2019t think this is likely: KinPFN should behave just like a GMM would if your conviction of its being a good approximation is true.\"}",
"{\"title\": \"Comment\", \"comment\": \"Thanks for clarifying. What I would like to understand to decide on my score is basically: if it's easy to directly model full Bayesian inference with the prior of interest, why should we pursue approximate inference with a PFN? Your baselines have a fixed number of components but it's not hard to fit 3 GMMs for k=3, 4, 5 and then weight the models according to their marginal likelihoods. What I'm worried about in other words is that your method approximates something we can model directly. This is why I asked about using real data that couldn't be modeled so easily. I would really appreciate your clarifying this!\\n\\nThe alternative point you made with the Adriaensen paper -- that in some cases the approximation does better than the exact procedure -- is I'm not sure a sound strategy for building a model. For example, if you fit a more flexible model with more compute then you would expect it to model the prior better and therefore do worse.\"}",
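The reviewer's suggested baseline (fit GMMs for several component counts and weight them by their marginal likelihoods) could be sketched roughly as follows. This is an illustrative approximation only: it uses scikit-learn's `GaussianMixture` with the BIC as a crude stand-in for the log marginal likelihood, on hypothetical bimodal data, and is not any model from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical bimodal data standing in for log first passage times
data = np.concatenate([rng.normal(2.0, 0.5, 300),
                       rng.normal(6.0, 0.8, 200)]).reshape(-1, 1)

models, log_weights = [], []
for k in range(2, 6):  # candidate component counts, mirroring the 2-5 modes
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    models.append(gm)
    log_weights.append(-0.5 * gm.bic(data))  # BIC approximates -2 log evidence

log_weights = np.array(log_weights)
weights = np.exp(log_weights - log_weights.max())
weights /= weights.sum()  # posterior-style weights over component counts

# Model-averaged density at a few query points
x = np.linspace(0.0, 8.0, 5).reshape(-1, 1)
density = sum(w * np.exp(gm.score_samples(x)) for w, gm in zip(weights, models))
```

Exact marginal likelihoods would require integrating over the GMM parameters (e.g. via MCMC); the BIC shortcut above only approximates that integral in the large-sample limit.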
"{\"summary\": \"Biomolecules adopt many conformation in situ and understanding the kinetics of the transitions between these conformations is useful for understanding their biophysical behavior. Expensive simulations can start at one conformation and measure how long it takes to transition to another conformation -- the passage times. To minimize the number of simulations needed to characterize the distribution of first passage times, the authors build a prior on the distribution of first passage times; this in principle allows them to do more efficient inference with fewer measurements.\\n\\nThe authors focus on RNA kinetics and expression data. They build their prior by fitting a language model on sequences of passage times. They demonstrate they fit data better than Dirichlet process and kernel density estimator baselines and have reasonable fits on real data.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is sound, and has extensive experimental validation.\", \"weaknesses\": \"See questions.\", \"questions\": \"1. Why not train on real passage time data that you can simulate? Why synthetic?\\n\\n2. The prior described by the synthetic data is so simple it may be easy to just run MCMC. Why use a language model at all?\\n\\n3. Language models as priors can have some pathologies. How does the model behave as the amount of data becomes large? In figure 11 it looks like even with a lot of data the PFN doesn't converge to the true CDF.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response to Reviewer TUhM\", \"comment\": \"Dear Reviewer TUhM,\\n\\nThank you for your valuable feedback and for highlighting the soundness and novelty of our approach. In the following, we provide answers to your questions and address your concerns in detail.\\n\\n>Is there any further conceptual novelty in this submission that I might be missing?\\n\\nWe would like to further emphasize an additional unique aspect of our method: its ability to operate PFNs without requiring quantile information. Unlike traditional PFN approaches that predict the PPD of a target y for a given quantile x conditional on a dataset D, our method learns the entire PPD of y (in this case, the first passage times) without knowledge about the quantiles (x is always a zero-vector), effectively representing the absence of further information.\\n\\nThis approach is motivated by the underlying problem structure, where we do not have access to the true quantiles of the context first-passage times. Therefore, we cannot treat the task as a standard regression problem but directly learn the PPD of first passage times conditional on a (data)set of context first passage times but without requiring quantile information, which is novel in the field of PFNs.\\n\\nWe add a small paragraph to the end of the background section to explicitly mark this change.\\n\\n>As far as I understand it, the structural model is based on secondary structures and uses (for simulations) the nearest neighbor (NN) energy model. The application to secondary structure is only partially described in the paper. It is an important limitation, and I am concerned that readers may miss this information. I would suggest updating Fig. 2 (reproduced from Muller et al, 2022) to include more details on the sampling (e.g., using a secondary structure model and Kinfold) and eventually on the benchmark too. 
This is just a suggestion as the author(s) may prefer to include this information at other places of the manuscript (e.g., expand the background section to describe the folding model).\\n\\nWe agree with the reviewer that we do not discuss the secondary structure aspects in detail in the initial manuscript. However, we design KinPFN to be independent of the simulator, and therefore also of the underlying secondary structure folding model. While we do not want to speculate about it without empirical evidence, we think that KinPFN as a pure in-context learner would also be applicable to very different simulation data. In this regard, our experiments with Kfold simulations, different start and stop structures (both Section 5.1), as well as the application to gene expression data (Section 5.4) could be seen as the first evidence.\\n\\nDoes this clarify the reviewer's concerns?\\n\\n>A claim of the paper is to dramatically accelerate folding simulation. I agree this is a nice feature, but I also wonder to what extent it is currently a major bottleneck (for biologists) or what new applications it will enable. I think the manuscript could benefit from further justifications/motivations. For instance, the usefulness of KinPFN for design applications sounds very promising, and I would have liked more discussion (or experiments?) related to this topic.\\n\\nWe agree with the reviewer that the speed-ups achieved by KinPFN are a major contribution of our work. These accelerations indeed offer the opportunity for novel applications, in particular, but not limited to, the field of RNA design. However, while we also agree that further assessment of this strength would be very interesting, we also think that the development of a kinetic RNA design algorithm (and the required experiments) is out of the scope of this work. That said, a different application of KinPFN is shown in the case study on the folding efficiency of different RNAs in Section 5.3. 
In this regard, KinPFN could e.g. be used to identify switching states in RNAs. Nevertheless, we agree with the reviewer that we could discuss the potential benefits of our approach more in our manuscript and we add a short section to the Introduction of the revised version of the paper.\"}",
"{\"title\": \"Author clarifications continued\", \"comment\": \">Now you've done more experiments that show more data that KinPFN outperforms even a mixture. I thank you for performing these experiments but here is my issue: KinPFN is trained to approximate a mixture of GMMs, and presumably, if trained long enough with a large enough architecture, would perform identically to them; therefore, if KinPFN outperforms the mixture of GMMs, it is because it has failed to accurately fit the data. This is a little suspect to me as a sound modeling paradigm because 1) it's unclear how to turn this into principles for designing PFN architectures and training these models if you're actively trying to avoid fitting the training data, and 2) it is likely that this mis-fitting will manifest in pathologies in other regimes, say large N.\\n\\nWe think the Reviewer\\u2019s conclusion that KinPFN failed to accurately fit the prior and therefore achieves good performance on real simulation data is not the only interpretation of our results and further does **not** align very well with our approach or our empirical results. We would like to challenge this view with a different interpretation that aligns much better with our setup, recent observations, and our empirical results by addressing the following three questions:\\n\\n1. Does KinPFN accurately fit the prior?\\n2. What can cause the worse performance of the GMM ensemble on samples drawn from the prior?\\n3. What explains the strong performance of KinPFN on the real simulation data?\\n\\n### 1. Does KinPFN accurately fit the prior?\\n\\nWe strongly disagree with the reviewer: KinPFN **is** trained to fit the prior, not to avoid fitting it in order to generalize better to new distributions. While we have to admit that HPO was performed on the newly introduced validation set of real simulation data, we did not use the validation performance for early stopping our training. 
Rather, we train KinPFN to \\u2018convergence\\u2019, or better, until the NLL varies only slightly within a given epsilon due to the infinite nature of the data. This also means that we have learned representations of the contexts whose combination results in minimizing the KL between the model prediction and the prior sufficiently since we are using NLL in the infinite data regime (for a proof see [1]; https://openreview.net/forum?id=KSugKcbNf9 ). Summarizing, we fit the prior well before continuing with further experiments.\\n\\nThis is also evidenced by the strong empirical results of KinPFN on the examples drawn from the prior and we account for the influence of different contexts by reporting the mean and the standard deviation across 20 different context inputs for all approximations shown in the manuscript.\\n\\nThat being said, if we were training KinPFN to not accurately fit the prior, wouldn\\u2019t that result in worse performance on samples drawn from the prior compared to a GMM ensemble? This is not the case and thus, we think that the interpretation of the reviewer is not well aligned with our empirical results. \\n\\nHowever, we cannot assume that the KL is zero for all inputs because we can only observe a small fraction of all data even when training forever using a larger model. Thus, even if the prior is a mixture of GMMs, we cannot assume that endless training on it would result in exact predictions that match the predictions of a GMM ensemble.\\n\\nThis leads to the question of why the GMM ensemble performs worse than KinPFN on samples drawn from the prior.\\n\\n### 2. 
What can cause the worse performance of the GMM ensemble on samples drawn from the prior?\\n\\nWhile we agree that we implement a parameterized multi-modal Gaussian prior distribution, it is possible that the prior might not always be optimal for approximations with a GMM since we did not carefully construct the prior to work well with GMMs but to learn reasonable representations in the PFN. For instance, the synthetic prior may involve overlapping or poorly separated components, making it challenging for GMMs to assign weights and means correctly. In contrast, KinPFN does not directly rely on GMM components or explicit parameterization of the prior during inference. \\n\\nIn addition, while the ensemble approach improves flexibility by using multiple component GMMs and weighting them by marginal likelihoods, it still requires explicit marginalization and proper model selection which might not be optimal and could require tuning.\"}",
"{\"title\": \"Response to Reviewer FZT1 continued\", \"comment\": \"We thank the reviewer for the helpful feedback. If there are any further questions or any clarification needed, we are happy to answer those. If not, we would appreciate it if the reviewer could increase our score.\\n\\nWith kind regards,\\n\\nThe authors\"}",
"{\"title\": \"Author response to Reviewer FZT1\", \"comment\": \"Dear Reviewer FZT1,\\n\\nPlease see our responses below.\\n\\n>Ok, it seems using synthetic data is not a good idea.\\n\\nWe do not really understand the conclusion drawn by the reviewer and do not agree that training on synthetic data is not a good idea. We would like to kindly ask the reviewer what this conclusion is based on.\\n\\n>I'm a little confused by the response to the second question. I'm not suggesting running MCMC on the RNA structure, just on the fit of the multimodal Gaussian data. Fitting these types of models is extremely standard and there is a very large amount of work on them. If they can be applied to the data then it seems they should be cited and compared to PFNs. Could the authors clarify this?\\n\\nWe are sorry about the reviewer\\u2019s confusion due to our response. \\n\\nWe chose PFNs over MCMC due to the strong performance of PFNs compared to MCMC in the task of learning curve extrapolation shown in [1]. \\n\\n[1] Adriaensen, S., Rakotoarison, H., M\\u00fcller, S., & Hutter, F. (2024). Efficient Bayesian learning curve extrapolation using prior-data fitted networks. Advances in Neural Information Processing Systems, 36.\\n\\n>For the last question, I would expect Gaussian mixture models to handle large amounts of data just fine.\\n\\nWe add the analysis of the different other methods to Tables 8, 9, and 10 in the revised version of our manuscript. Indeed, it seems like GMMs can handle large contexts quite well, showing the best performance in terms of NLL; however, KinPFN outperforms all the other methods in terms of MAE and KS also for the large context sizes.\\n\\nWith kind regards,\\n\\nThe Authors\"}",
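As a rough illustration of the NLL metric discussed throughout this thread, the mean negative log-likelihood of held-out first passage times under a fitted density can be computed as below. The hand-rolled Gaussian KDE, bandwidth, and data are all hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

def gaussian_kde_logpdf(train, query, bandwidth):
    """Log-density of a Gaussian KDE fitted on `train`, evaluated at `query`."""
    diffs = (query[:, None] - train[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return np.log(kernels.mean(axis=1) / bandwidth)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 500)     # hypothetical context FPTs (log scale)
held_out = rng.normal(0.0, 1.0, 200)  # hypothetical held-out FPTs

# Mean NLL: lower means the fitted density explains the held-out FPTs better
nll = -gaussian_kde_logpdf(train, held_out, bandwidth=0.3).mean()
print(f"mean NLL: {nll:.3f}")
```

The same held-out evaluation applies to any of the baselines in the discussion (GMMs, DP-GMMs, or a PFN's predictive distribution); only the log-density function changes.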
"{\"summary\": \"The paper introduces a model for predicting RNA folding dynamics called KinPFN. Specifically, this work focuses on computing the time needed for an RNA to fold into a specific structure (i.e., the first passage time). KinPFN uses a prior-data fitted network (PFN) that is calibrated using a sample of synthetic first passage time to predict entire cumulative distribution or RNA structures. The methodology is benchmarked on random and real sequences, and the paper concludes with an illustration of an application to gene expression.\\n\\nOverall, the methodology is sound, and the results are convincing. The development of RNA folding dynamics prediction tools is timely and only a limited set of programs are currently available. Furthermore, machine learning approaches seem suited to this task, and, to my knowledge, this contribution is the first of its kind. The contributed approach is still a proof-of-concept, but it provides solid ground for further exploration of ML. The manuscript is clear and well written, yet it could benefit of clarifications suggested below.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Overall, it is a nice paper from a bioinformatics perspective. The machine learning component is a bit limited but I believe fits the broad definition of originality and significance through \\\"application to a new domain.\\\" The authors basically apply one existing framework (prior-data filtering networks or PFNs) to model RNA folding dynamics. The innovative aspect of this work is thus rather the application to RNA folding. The research is timely, and the gain in speed could lead to interesting RNA design applications. The authors mention it but do not discuss it much. However, even if it is still very early, there are interesting/promising applications at the end of the paper. 
It is a clean and solid piece of work from which could emerge highly useful applications.\", \"weaknesses\": \"\\u2022\\tAs far as I understand it, the structural model is based on secondary structures and uses (for simulations) the nearest neighbor (NN) energy model. The application to secondary structure is only partially described in the paper. It is an important limitation, and I am concerned that readers may miss this information. I would suggest updating Fig. 2 (reproduced from Muller et al, 2022) to include more details on the sampling (e.g., using a secondary structure model and kinfold) and eventually on the benchmark too. This is just a suggestion as the author(s) may prefer to include this information at other places of the manuscript (e.g., expand the background section to describe the folding model).\\n\\u2022\\tA claim of the paper is to drastically accelerate folding simulation. I agree this is a nice feature, but I also wonder to what extent it is currently a major bottleneck (for biologists) or what new applications it will enable. I think the manuscript could benefit from further justifications/motivations. For instance, the usefulness of KinPFN for design applications sounds very promising, and I would have liked more discussion (or experiments?) related to this topic. Moreover, since the models currently use kinfold simulations, it could be limited by the accuracy of the NN energy model, which has limitations on longer RNAs. Maybe KinPFN offers new perspectives to deal with this challenge?\\n\\u2022\\tIt is not clear to me that PFNs perform much better than other options (e.g., KDEs) but at least they seem to be suited to the task. Table 1 compares KinPFN to DP-GMM and KDE models. The performance of KDEs seems very close to KinPFN, and I wonder if the author(s) could provide more context in the main body to discuss the significance of the results (e.g., from supp mat H.2)?\\n\\u2022\\tFig. 
3a shows a distribution of the performance across RNAs of various lengths. It may be out of the scope of this manuscript, but I would be interested to understand better how other features such as the nucleotide composition, MFE value, number of base pairs or loops, etc. impact the results. Also, how does KinPFN perform on multi-stable RNAs?\", \"questions\": \"Is there any further conceptual novelty in this submission that I might be missing?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Author response\", \"comment\": \"Dear Reviewer FZT1,\\n\\nAgain, we would like to thank you for your quick response. We will detail our thoughts on your reply in the following.\\n\\n>Thank you for the productive conversation so far! My sense is that the main contribution of this paper is that it suggests fitting a 3 component Gaussian mixture model to a one-dimensional summary statistic from RNA kinetics simulation. While potentially useful and interesting to RNA biologists, I don't believe this represents a significant machine learning contribution. I've adjusted my score to reflect this. On the other hand, despite the abundance of methods to fit these types of models (which also could have been used to tune hyperparameters), the strategy of this paper is to approximate this inference by pre-training an LLM, as suggested by previous works; while this strategy has its pros and cons, it doesn't rise to the level of a fundamental machine learning contribution.\\n\\nWe respectfully disagree with the reviewer's description of our approach. To our knowledge, applications to physical sciences (including chemistry and biology) are explicitly listed as a relevant topic in the ICLR call for papers at https://iclr.cc/Conferences/2025/CallForPapers and our approach describes a significant interdisciplinary contribution to an interesting application from the field of biology. In addition, we do not fit a 3 component Gaussian mixture to the problem of RNA first passage time distribution approximation but develop a new strategy that allows training a prior-data fitted network (PFN) on synthetic first passage time (FPT) distributions without the need for quantile information. Also, the model is not an LLM (at 4.8M parameters, it is not large, and no language is involved at all). However, the resulting model, KinPFN, is the first deep learning model in the field of RNA folding kinetics analysis. 
This is a novel contribution not only to the field of PFNs but also to the field of RNA kinetics. However, while we agree that our new prior is arguably simple and describes distributions of parameterized multi-modal Gaussians, our empirical analysis (see also previous response) further indicates that KinPFN learns to approximate OOD data better than GMMs and ensembles of these, resulting in superior performance. Our training on synthetic data for a biological problem further showcases that it is possible to model biological processes even in the absence of real-world examples. We, therefore, think that our new strategy for training and inferring PFNs in the absence of quantile information, the novelty and relevance of the topic, as well as the strong approximation performance which results in massive speed-ups, are reasonable contributions of an application paper at ICLR. \\n\\nIn addition, while we would call KinPFN a pioneering work in the field, it shows a large potential to create a substantial impact in an important application area. Since the prior is not limited to a multi-modal Gaussian, there is no need to retrain or refit KinPFN for applications in online learning settings such as kinetic RNA design, and since KinPFN performs stably without further hyperparameters during inference, our method shows several clear advantages over simply fitting a 3 component Gaussian mixture model to the problem.\\n\\nWith kind regards,\\n\\nThe Authors\"}",
"{\"title\": \"Author clarifications continued\", \"comment\": \">Also, excuse me for using the term \\\"LLM\\\". I accidentally used it as a metonym for a transformer.\\n\\nWe thank the reviewer for the apology. We mentioned this minor issue as we think the term LLM is quite biased these days and might lead to misunderstandings.\\n\\n>Finally, I appreciate the importance of interdisciplinary research and I also appreciate the authors working on solving problems in RNA kinetics. However, fitting simple, classical Bayesian models using packages like STAN is a staple of modern computational biology research; these models are regularly fit to microscopy, health, spectroscopy, and all sorts of other data. Therefore, I don't find approximately fitting a 4 component mixture of GMMs to one dimensional data to be a novel machine learning contribution.\\n\\nSince KinPFN is the first model in the field, we would be very happy if future work challenges our results with different approaches. However, we think that using a PFN for our problem is also a valid approach for the reasons described above.\\n\\n[1] M\\u00fcller, S., Hollmann, N., Arango, S. P., Grabocka, J., & Hutter, F. Transformers Can Do Bayesian Inference. In International Conference on Learning Representations.\\n\\n[2] Singh, A., Chan, S., Moskovitz, T., Grant, E., Saxe, A., & Hill, F. (2024). The transient nature of emergent in-context learning in transformers. Advances in Neural Information Processing Systems, 36.\\n\\nWe thank the reviewer for the helpful and interesting discussion that helps us improve our manuscript. We hope that our explanations help to clarify the Reviewer\\u2019s concerns and questions. However, we are happy to further discuss the advantages of ICL with PFNs for the problem at hand if necessary. If this is not the case, we would like to kindly ask the Reviewer to consider updating our score.\\n\\nWith kind regards,\\n\\nThe Authors\"}",
"{\"title\": \"Author clarifications\", \"comment\": \"Dear Reviewer FZT1,\\n\\nWe thank you for your fast responses. Please see our clarifications below.\\n\\n>Thanks for clarifying. What I would like to understand to decide on my score is basically: if it's easy to directly model full Bayesian inference with the prior of interest, why should we pursue approximate inference with a PFN? Your baselines have a fixed number of components but it's not hard to fit 3 GMMs for k=3, 4, 5 and then weight the models according to their marginal likelihoods. What I'm worried about in other words is that your method approximates something we can model directly. This is why I asked about using real data that couldn't be modeled so easily. I would really appreciate your clarifying this!\\n\\nWe thank the reviewer for this interesting and important question.\\n\\nSince we cannot assume that the distribution of first passage times can be fully represented as a multi-modal Gaussian, it is likely that the GMM cannot infer it exactly. However, we agree that an ensemble of GMMs for different components (weighted according to their marginal likelihoods) could improve flexibility and performance. We thus implement it and compare KinPFN to the GMM ensemble on the test set of 635 randomly generated sequences with KinFold simulations. 
The results are shown below.\\n\\nEnsemble for components k = 2,3,4,5:\\n\\n| Context Size | Model | MAE | NLL |\\n| -------------- | -------- | ------ | ------- |\\n| 10 | KinPFN | **0.084** | **1.374** |\\n| | GMM Ensemble | 0.093 | 6.417 |\\n| 25 | KinPFN | **0.056** | **1.244** | \\n| | GMM Ensemble | 0.084 | 1.917 |\\n| 50 | KinPFN | **0.039** | **1.205** |\\n| | GMM Ensemble | 0.081 | 1.389 | \\n| 75 | KinPFN | **0.033** | **1.192** | \\n| | GMM Ensemble | 0.080 | 1.261 |\\n| 100 | KinPFN | **0.030** | **1.186** |\\n| | GMM Ensemble | 0.078 | 1.218 |\\n\\nFor components k = 2,3,4:\\n\\n| Context Size | Model | MAE | NLL |\\n| ------------- | -------- | ------ | ------ |\\n| 10 | KinPFN | **0.084** | **1.374** |\\n| | GMM Ensemble | 0.095 | 4.270 |\\n| 25 | KinPFN | **0.056** | **1.244** | \\n| | GMM Ensemble | 0.086 | 1.639 |\\n| 50 | KinPFN | **0.039** | **1.205** |\\n| | GMM Ensemble | 0.081 | 1.312 | \\n| 75 | KinPFN | **0.0333** | **1.192** | \\n| | GMM Ensemble | 0.080 | 1.228 |\\n| 100 | KinPFN | **0.030** | **1.186** |\\n| | GMM Ensemble | 0.078 | 1.202 |\\n\\nKinPFN clearly outperforms both ensemble variants across all context sizes. 
\\n\\nWe also evaluate a GMM ensemble (k = 2,3,4,5 matching the KinPFN training modes) on 10,000 samples directly from the prior:\\n\\n| Context Size | Model | MAE | NLL |\\n|----------------|-----------------------------|-----------------------------|----------------|\\n| 10 | KinPFN | **0.088** | **2.427** | \\n| | GMM Ensemble | 0.103 \\t| 7.386 |\\n| 25 | KinPFN | **0.055** | **2.136** | \\n| | GMM Ensemble | 0.0714 \\t| 2.732 \\t|\\n| 50 | KinPFN | **0.039** | **2.060** | \\n| | GMM Ensemble \\t| 0.062 \\t| 2.204 \\t|\\n| 75 | KinPFN | **0.032** | **2.039** | \\n| | GMM Ensemble \\t| 0.059 \\t| 2.100 \\t|\\n| 100 | KinPFN | **0.028** | **2.028** |\\n| | GMM Ensemble \\t| 0.057 \\t| 2.061 \\t|\\n\\nWe think that these results allow for two conclusions: (1) It doesn\\u2019t seem to be the case that an ensemble of GMMs can approximate the data similarly well as KinPFN and (2) KinPFN does not only mimic the behavior of GMMs. The reason might be that KinPFN does not directly rely on GMM components or explicit parameterization of the prior during inference. Instead, it is trained on a distribution of multi-modal priors and might learn to generalize beyond individual instances of these priors. This could allow KinPFN to better capture subtle features of the synthetic prior, such as complex dependencies between modes or variations in the structure of the modes during training. In essence, KinPFN learns a more effective representation of the prior during training and therefore generalizes better to the real-world data.\\n\\nThat being said, we would like to emphasize some more advantages of KinPFN over GMMs:\\n\\n- In contrast to GMMs, KinPFN is not limited to multi-modal Gaussian distributions but can be trained on different prior distributions or even mixtures of them.\\n- There is no need to retrain or refit KinPFN and there is only a single forward pass required to approximate a given distribution. 
This is particularly important for applications such as kinetic RNA design, where KinPFN could be used on top of an oracle (e.g. KinFold).\\n- KinPFN performs stably and does not require hyperparameters at test time.\\n\\nWe hope this clarifies the reviewer\\u2019s question.\"}",
"{\"title\": \"Author clarifications continued\", \"comment\": \">Claims that KinPFN could be trained to fit other priors seem to have worked out more poorly than fitting the synthetic data -- you mentioned that the performance was worse when fit to simulation data. For this reason, I don't think this potential flexibility of the PFN should count as an advantage.\\n\\nAs mentioned before, we tried training KinPFN on simulation data for relatively short sequences (up to 30 nucleotides) since simulations for longer sequences are infeasible due to long runtimes. As also mentioned in the same response, the conformation space grows exponentially with the sequence length and we, therefore, expect that the real-world prior does not fit the true posterior very well. Furthermore, we are moving away from the infinite data regime back to a setting where the training results are bound by data availability rather than being compute bound as in the infinite data setting. We thus have to avoid overfitting and might have to use strong regularization to achieve good results. We did not further follow up on this approach because the data is not available at scale. However, we do not think that our bad preliminary results on real simulation data are related to not fitting the distribution but rather they are mainly due to limited data availability. \\n\\nWe, therefore, still think that a synthetic prior that fits the true posterior better than the current one, e.g. by using other distributions than multi-modal Gaussians, could still lead to performance improvements. 
However, there has to be support for good approximations of the posterior in the prior as mentioned before.\\n\\n>On the other hand, the Adriaensen paper attempted to fit a prior that was much harder to approximate via MCMC; in this case they suggested that a PFN might be a good alternative and showed that in their experiment in their Fig 3 -- they in particular showed that the failure of MCMC was that even with many many samples, the posterior could not be fit. As I understand, this paper makes a different argument: not that MCMC is hard for this data, but that KinPFN generalizes in a useful way when not trained to exactly match; this is a more suspect argument for the reasons outlined above.\\n\\nAs outlined in detail above, we do not agree that KinPFN is trained to not fit the prior. However, we do think that KinPFN can generalize to new distributions.\\n\\n>Another point the authors made along the lines of the argument of the Adriaensen paper was that KinPFN is fast while MCMC is time consuming. STAN I believe can fit these models on the order of a minute and there are faster GPU-accelerated libraries as well. For this to count as a substantial contribution, I believe the authors should argue that reducing the fitting time of an RNA kinetics curve down from a minute is useful; given that the kinetics data is not super abundant, what is enabled by being able to analyze these data faster than a minute?\\n\\nWe agree with the Reviewer that this might not be a major advantage of KinPFN over these methods in our case. However, we also do not see a good reason to use an alternative approach to PFNs when we can assume that the methods will be orders of magnitude slower (as indicated by the runtime of 1 minute mentioned by the reviewer compared to a single forward pass) before having run a single experiment. 
We, therefore, think it is still valid to prioritize a PFN approach over other solutions (which also have never been used to speed up RNA kinetics simulations before!) for the development of a method for an application where there is no previous knowledge available.\"}",
"{\"summary\": \"This paper studies the RNA folding kinetics modeling problem, which is helpful for understanding RNA behavior and designing RNA. The authors propose to apply a deep learning method based on a prior-data fitted network to quickly estimate the distribution of RNA folding times. Experiments in synthetic datasets and real examples demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper is the first to use deep learning methods to study RNA folding kinetics modeling, an important problem in RNA biology.\\n2.\\tOn synthetic datasets, the proposed method demonstrates superior performance compared to traditional approaches such as kernel density estimation and Gaussian mixture models.\\n3.\\tThe proposed method has an advantage in running speed.\\n4.\\tThe paper provides comprehensive details on dataset construction and model training, ensuring high reproducibility.\", \"weaknesses\": \"1.\\tThe current writing does not facilitate quick comprehension of the research problem for readers from diverse backgrounds. It would be beneficial if the author could include a figure illustrating specific data and formalization when introducing the problem. For instance, depicting the relationship between the RNA folding process and the corresponding change in folding fraction could enhance clarity.\\n2.\\tThe unique challenges RNA folding kinetics pose are not adequately summarized in the introduction. Additionally, the paper directly employs prior-data fitted networks to model the CDF without additional enhancements. 
Highlighting the improvements made to address the specific issues in this field would enhance the paper's contribution.\\n3.\\tIt would be clearer to explicitly state in the introduction or background section whether the paper focuses on RNA's tertiary or secondary structure, and how the folding ratio is calculated.\\n4.\\tCompared to the dynamic changes in RNA secondary or tertiary structures, the folding ratio provides very coarse-grained information about RNA folding dynamics, which seems still far from practical applications. The paper needs to further elucidate how this study can contribute to solving RNA biology problems.\\n5.\\tIn the experimental section, the results from Kinfold are used for validation, but the inherent error of Kinfold needs rigorous demonstration, which diminishes the persuasiveness of the results. Is it possible to use collected or published wet lab data for the evaluation of this problem?\\n6.\\ttRNA and rRNA are the most common and numerous types of RNA. It would be better to test a broader and more diverse range of RNA types.\\n7.\\tThere is a lack of research and discussion on deep learning methods suitable for the data in this problem. The paper only presents the prior-data fitted network for deep learning-based probability density estimation.\", \"questions\": \"Please refer to the Weaknesses section for details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Author clarifications continued\", \"comment\": \"### 3. What explains the strong performance of KinPFN on the real simulation data?\\n\\nWhile we are aware of work showing that ICL is a transient phenomenon when training a transformer and that it competes with in-weight learning (IWL) (see e.g. [2]; https://arxiv.org/pdf/2311.08360), recent results show that this is not necessarily the case in the PFN setting.\\n\\nFor example, it was recently shown that a PFN trained only on step-functions can generalize to smooth predictions while fitting the prior well (see https://x.com/SamuelMullr/status/1841907948219727984 ; on X, yes, but still a nice result). \\n\\nGiven these insights, we can assume that KinPFN can generalize to distributions different from the prior, as indicated empirically with strong performance on the real simulation data, outperforming GMMs and ensembles of these.\\n\\nHowever, to further support this claim, we construct a simple synthetic example, using a multi-modal uniform distribution for the prior instead of the multi-modal Gaussian. We evaluate the ensemble of GMMs and KinPFN on 10,000 samples drawn from this (similar but clearly different) prior. 
The results are shown below.\\n\\n| Context Size | Model | MAE | NLL |\\n|----------------|-----------------------------|-----------------------------|----------------|\\n| 10 | KinPFN | **0.086** | **1.462** | \\n| | GMM Ensemble | 0.118 \\t| 7.085 |\\n| 25 | KinPFN | **0.062** | **1.314** | \\n| | GMM Ensemble | 0.106 \\t| 1.685 \\t|\\n| 50 | KinPFN | **0.051** | 1.281 | \\n| | GMM Ensemble \\t| 0.102 \\t| **1.033** \\t|\\n| 75 | KinPFN | **0.046** | 1.271 | \\n| | GMM Ensemble \\t| 0.099 \\t| **0.935** \\t|\\n| 100 | KinPFN \\t| **0.044** \\t| 1.266 \\t|\\n| | GMM Ensemble \\t| 0.098 \\t| **0.901** \\t|\\n\\nThese results support that KinPFN can generalize to a different distribution than defined in the prior.\\n\\nHowever, we agree with the Reviewer that a clear requirement to achieve this generalization is that the prior supports reasonable approximations of the posterior. In other words, if we cannot learn a good representation of the context that would also provide good approximations when using context from the posterior, the learned representations get exponentially worse during training (due to the multiplicative form of the likelihood). Our model would get more confident in a wrong representation, which would lead to worse performance when adding more and more context (larger N). However, this is not the case for KinPFN as we show empirically in Tables 8-10 where the performance of KinPFN constantly improves even with very large N. 
\\n\\nWe thus conclude that (1) KinPFN has learned good representations from the prior that support approximations of the posterior and (2) KinPFN can generalize to different distributions outside of the prior.\\n\\nIn summary, we think that the interpretation that KinPFN has learned good representations of the prior that enable generalization to the similar but different real simulation posterior, aligns much better with our empirical findings than the interpretation of the Reviewer that strong performance of KinPFN is a result of inaccurate fitting of the prior during training.\\n\\nThis could even be formulated as a contribution, since our results allow us to derive principles that help training PFNs for a given task: We provide new evidence that PFNs and ICL can lead to generalization to new distributions, while also showing the necessity to carefully check the performance on similar but different distributions, particularly for larger Ns.\"}",
"{\"title\": \"Initial author response\", \"comment\": \"We thank all reviewers for their useful comments and valuable feedback. Specifically, we thank the reviewers for pointing out the novelty of our approach and its timeliness.\\n\\nWe will prepare individual responses for each review in the next few days.\\n\\nWe are looking forward to fruitful discussions and an interesting rebuttal period.\\n\\nWith kind regards,\\n\\nThe authors\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Comment\", \"comment\": \"Thanks for the response! My statement about the synthetic data was only to say that you've convinced me that my suggestion of training on synthetic data to build a prior was not a sound idea.\\n\\nWRT the comparison to GMMs, I appreciate the authors including tables 7-9. As I understand, however, KinPFN was trained to model a GMM prior, so why does it outperform them? Is it because the number of components is variable? If you built a GMM with a prior over the number of components identical to that of KinPFN, then how would it perform?\"}",
"{\"metareview\": \"The paper introduces a novel ML approach which employs prior-data fitted networks to compute RNA first passage times. The approach can be combined with RNA kinetics simulators to achieve significant speedup and has the potential to enable analyses that could not be performed previously due to prohibitive runtimes.\\n\\nThe presented method is the first ML-based approach for the problem of RNA first passage times. The effectiveness of the method is convincingly demonstrated. The authors have done a great job at addressing the reviewers' points, including clearly articulating the novel methodological aspects of the proposed approach, providing additional results (the first passage time distribution approximations on two additional RNA types), comparison of KinPFN against an ensemble of GMMs, and other clarifying points. \\n\\nOverall, this is great work that opens up exciting avenues for future ML research on the important task of approximating RNA first passage times.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised points regarding the need to clarify novel methodological aspects if applicable (concern shared by Reviewers TUhM and DNAm), the focus on kinetics based on secondary structures (again a point raised by both reviewers TUhM and DNAm), providing additional discussion on the potential benefits of the approach, reporting additional results (KS test between CDFs, testing on additional RNA types requested by DNAm), comparison against GMM ensemble and other clarifying points requested by Reviewer FZT1.\\n\\nThe authors have addressed all these points in a very convincing way and have modified their manuscript accordingly. It is also remarkable that the authors were able to get hold of simulation data and provided approximation results for two additional RNA types during the rebuttal time.\"}",
"{\"title\": \"Response to Reviewer FZT1\", \"comment\": \"Dear Reviewer FZT1,\\n\\nThank you for your positive feedback and highlighting the soundness of our approach.\\n\\n>Why not train on real passage time data that you can simulate? Why synthetic?\\n\\nGenerating a sufficiently large dataset for training a deep learning approach is currently infeasible due to the exponential increase in computation time required by Kinfold (and other kinetic simulators) as the RNA sequence length grows linearly. This is caused by an exponential growth of the conformational space S_n with the sequence length n, roughly following S_n \\\\approx 1.86^n. Additionally, a dataset consisting of simulation data from a single simulator might be strongly biased, and the trained deep learning approach would potentially struggle to generalize across different simulators. However, our initial approach involved training a PFN using real first-passage time data that we generated by running Kinfold on a large set of very short sequences. However, training a PFN on real simulation data resulted in substantially worse performance compared to KinPFN, even when evaluating on data obtained from the same simulator.\\n\\nGenerally, the generation of synthetic data for biological applications is challenging because we cannot make any assumptions about the underlying data generating process. We, therefore, decided to approach the problem by directly learning the distribution of first passage times in an in-context learning setup that allows us to condition on previous observations (e.g. obtained from simulator data). 
Besides providing improved performance, this approach makes KinPFN broadly applicable to RNA folding kinetics independent of the sequence length, the simulator used for generating the context data, the RNA type, the energy differences between the start and stop structure, and even the specific problem addressed by the approximations (as shown by our generalization to gene expression data).\\n\\nWe think that training on synthetic data could solve many current issues connected to the usage of deep learning methods, particularly in the life sciences. These issues include, e.g., overfitting due to limited amounts of available training data, unbalanced training data, as well as fairness problems. With KinPFN, we demonstrate that even with a relatively simple synthetic prior, we could generate synthetic data that represents a biological problem rather well, leading to strong approximation quality, avoiding the risk of overfitting, while learning from unlimited amounts of data that provides full control over data parametrizations. \\n\\n>The prior described by the synthetic data is so simple it may be easy to just run MCMC. Why use a language model at all?\\n\\nMonte Carlo methods are regularly used in RNA folding kinetics and we agree that it would be beneficial to exclusively train on MCMC-generated data. However, the runtime of MCMC depends on the chain length, and thus on the number of structural states. Therefore, running MCMC on long RNAs is computationally infeasible.\\n\\nSecondly, [1] recently showed that PFNs clearly outperform MCMC in terms of runtime and accuracy on the task of learning curve extrapolation, highlighting that generating sufficient samples with MCMC to reliably approximate the posterior may impose significant overhead (see Section 4 in [1]). 
\\n\\nWe, therefore, decide to base our approach on PFNs rather than MCMC, and to train on synthetic FPT distributions instead of simulation data.\\n\\n[1] Adriaensen, S., Rakotoarison, H., M\\u00fcller, S., & Hutter, F. (2024). Efficient bayesian learning curve extrapolation using prior-data fitted networks. Advances in Neural Information Processing Systems, 36.\\n\\n>Language models as priors can have some pathologies. How does the model behave as the amount of data becomes large? In figure 11 it looks like even with a lot of data the PFN doesn't converge to the true CDF.\\n\\nWe agree with the reviewer that the context can have a substantial impact on the resulting approximations. We tried to account for this by reporting means and standard deviations across 20 different contexts for all the approximations shown in the manuscript. Generally, we observe that the approximation performance is better with more context. In Figure 11 mentioned by the reviewer, the largest context size is 75, which arguably is still relatively small compared to the 1000 (or typically even more) simulations required to obtain reliable CDFs from Kinfold. However, also in Figure 11, we observe a clear trend of improved performance with more context data available, which is also a general pattern shown in Table 6 where we compare KinPFN with different alternative approaches. However, we agree with the reviewer that it might be interesting to see KinPFN\\u2019s behavior for much larger context sizes. We, therefore, add a Table with the results for context sizes of up to 1000 to the Appendix H.2 of our revised manuscript.\"}",
"{\"title\": \"Comment\", \"comment\": \"Thanks again for your quick detailed response!\\n\\nPreviously you showed evidence that KinPFN outperforms GMMs with fixed k. As I understood, you affirmed my hypothesis that KinPFN outperforms GMMs because it does full Bayesian inference over the number of components. Now you've done more experiments that show with more data that KinPFN outperforms even a mixture. I thank you for performing these experiments but here is my issue: KinPFN is trained to approximate a mixture of GMMs, and presumably, if trained long enough with a large enough architecture, would perform identically to them; therefore, if KinPFN outperforms the mixture of GMMs, it is because it has failed to accurately fit the data. This is a little suspect to me as a sound modeling paradigm because 1) it's unclear how to turn this into principles for designing PFN architectures and training these models if you're actively trying to avoid fitting the training data, and 2) it is likely that this mis-fitting will manifest in pathologies in other regimes, say large N.\\n\\nClaims that KinPFN could be trained to fit other priors seem to have worked out more poorly than fitting the synthetic data -- you mentioned that the performance was worse when fit to simulation data. For this reason, I don't think this potential flexibility of the PFN should count as an advantage.\\n\\nOn the other hand, the Adriaensen paper attempted to fit a prior that was much harder to approximate via MCMC; in this case they suggested that a PFN might be a good alternative and showed that in their experiment in their Fig 3 -- they in particular showed that the failure of MCMC was that even with many many samples, the posterior could not be fit. 
As I understand, this paper makes a different argument: not that MCMC is hard for this data, but that KinPFN generalizes in a useful way when not trained to exactly match; this is a more suspect argument for the reasons outlined above.\\n\\nAnother point the authors made along the lines of the argument of the Adriaensen paper was that KinPFN is fast while MCMC is time consuming. STAN I believe can fit these models on the order of a minute and there are faster GPU-accelerated libraries as well. For this to count as a substantial contribution, I believe the authors should argue that reducing the fitting time of an RNA kinetics curve down from a minute is useful; given that the kinetics data is not super abundant, what is enabled by being able to analyze these data faster than a minute?\\n\\nAlso, excuse me for using the term \\\"LLM\\\". I accidentally used it as a metonym for a transformer.\\n\\nFinally, I appreciate the importance of interdisciplinary research and I also appreciate the authors working on solving problems in RNA kinetics. However, fitting simple, classical Bayesian models using packages like STAN is a staple of modern computational biology research; these models are regularly fit to microscopy, health, spectroscopy, and all sorts of other data. Therefore, I don't find approximately fitting a 4-component mixture of GMMs to one-dimensional data to be a novel machine learning contribution.\"}"
]
} |
E1Tr7wTlIt | $\lambda$-SecAgg: Partial Vector Freezing for Lightweight Secure Aggregation in Federated Learning | [
"Siqing Zhang",
"Wei Sun",
"Yong Liao",
"Peng Yuan Zhou"
] | Secure aggregation of user update vectors (e.g. gradients) has become a critical issue in the field of federated learning. Many Secure Aggregation Protocols (SAPs) face exorbitant computation costs, severely constraining their applicability. Given the observation that a considerable portion of SAP's computation burden stems from processing each entry in the private vectors, we propose Partial Vector Freezing (PVF), a portable module for compressing computation costs without introducing additional communication overhead. $\lambda$-SecAgg, which integrates SAP with PVF, "freezes" a substantial portion of the private vector through specific transformations, requiring only $\frac{1}{\lambda}$ of the original vector to participate in SAP. Eventually, users can "thaw" the public sum of the "frozen entries" by the result of SAP. To avoid potential privacy leakage, we devise Disrupting Variables Element for PVF. We demonstrate that PVF can seamlessly integrate with various SAPs and it poses no threat to user privacy in the semi-honest and active adversary settings. We include $7$ baselines, encompassing $5$ distinct types of masking schemes, and explore the acceleration effects of PVF on these SAPs. Empirical investigations indicate that when $\lambda=100$, PVF yields up to $99.5\times$ speedup and up to $32.3\times$ communication reduction. | [
"Secure aggregation",
"Federated learning"
] | Reject | https://openreview.net/pdf?id=E1Tr7wTlIt | https://openreview.net/forum?id=E1Tr7wTlIt | ICLR.cc/2025/Conference | 2025 | {
"note_id": [
"wmQm643DnY",
"uLw9eNCndG",
"u8iMueVy6J",
"s1WlDhLNpK",
"r3lBZOFK1o",
"pTO5zHc2zS",
"olRmebhsCd",
"jZwh1Exxq8",
"iog03MH6Wy",
"hjz6ahLbnY",
"hTUfI19fKa",
"guDpIgS3rm",
"gtGgEJ9Rsf",
"gni4rJ92Cw",
"gejoF0T3hk",
"g4WDJeTCXQ",
"dTv4WR0oIw",
"d9RgcLFX4B",
"ctpZC77ttx",
"buMmS4KAXr",
"bsL0sUszG2",
"b0h5beqYwY",
"ax9cFzj8F5",
"armNq6FlKd",
"alKhfyCdXT",
"a6KiqTUF5v",
"a4BLoOVzMZ",
"Z5EpFhlMR8",
"YXHJQQ79LE",
"YDZ9cYbNU8",
"SoAK8rJe1s",
"SkpGgHMdZ3",
"PvZl7ZfAWD",
"MY27JLsnQM",
"Jwwwx6XRbM",
"JGqdXpl65q",
"IigryNHEtb",
"IH3lGEb2b7",
"GIZQ5jcvK6",
"DDKsYC5yJ9",
"CqU0EBSDsG",
"CidCP7eTrB",
"B9LG83SV80",
"9JvgAqxWAn",
"8KxtbULnL6",
"80xJW8kRpU",
"7Bq7WRqDhB",
"6HywBYgiIm",
"67PplfQZaJ",
"4mWzOdBzVs",
"3ukVFVgTlT",
"3SuACtcE8L",
"2jHhMmxfTP"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1732359242962,
1732019584438,
1732269181493,
1732373925363,
1730725684617,
1732540063095,
1732005674463,
1732630323690,
1732123680773,
1737523617501,
1733217364776,
1733219832367,
1733193245194,
1732615279469,
1732545706750,
1732543281097,
1732348724911,
1732006226719,
1732005761237,
1732005832702,
1732005726259,
1732896304966,
1732184304962,
1730688573491,
1732632985015,
1730641519873,
1733225408935,
1732882637155,
1732908046565,
1732558676539,
1732348292022,
1732552114506,
1732005193296,
1730366206682,
1732799883653,
1732799572577,
1732011415374,
1732005113781,
1733226590450,
1730714404887,
1732703553842,
1732630459506,
1732019739578,
1732703790561,
1733067240116,
1732882808116,
1733376962084,
1732363663553,
1732010317172,
1730627105820,
1732348136446,
1732898826067,
1732548935260
],
"note_signatures": [
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_SAYu"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_zANE"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Program_Chairs"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_zANE"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_z5KJ"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_a2uE"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_SAYu"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_zANE"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_r5Dg"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_zANE"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Area_Chair_g6rG"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_SAYu"
],
[
"ICLR.cc/2025/Conference/Submission4067/Reviewer_9mLH"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
],
[
"ICLR.cc/2025/Conference/Submission4067/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"I would like to differ with the authors in this comment. I have never acknowledged that the proposed method preserves privacy at any point of my discussion and review.\"}",
"{\"comment\": \"Given the private vector $\\\\mathbf{x} = (x_1, x_2, x_3) = (7, 0, 0)$, it is indeed evident that all elements of the vector $\\\\mathbf{y}$ are multiples of 7. However, from the server's perspective (the server obviously does not know $\\\\mathbf{x}$), it only has access to $\\\\mathbf{y}$, which is expressed as:\\n$$\\n\\\\begin{array}{lll}\\n6x_1+9x_2+5x_3=42, \\\\\\\\\\\\\\\\\\n8x_1+4x_2+x_3=56,\\n\\\\end{array}\\n$$\\nFor instance, the server cannot distinguish whether $\\\\mathbf{x}$ is $(7, 0, 0)$ or $(18, -34, 48)$. Additionally, we have revised H1, please refer to the latest version of the PDF.\"}",
"{\"comment\": \"Thank you for your answers.\"}",
"{\"title\": \"General response 3: Privacy Protection Overview\", \"comment\": \"Dear reviewers, PCs and ACs:\\n\\nThe primary concerns raised by @Reviewer SAYu, @Reviewer zANE, and @Reviewer r5Dg revolve around the potential leakage of partial information about $\\\\mathbf{x}^i$ through $\\\\mathbf{y}^i$, leading to privacy compromise. Below, we briefly outline the privacy protection of our approach to substantiate the claim that \\\"*PVF effectively preserves user privacy*\\\" in General Response 2:\\n### **1. Basic version: privacy in the Main PVF method (Sec. 3.2)**\\nThe privacy of $\\\\mathbf{x}^i$ in the Main PVF is primarily safeguarded by **the hardness of determining a specific solution to an under-determined system of linear equations**. For instance, given the private vector $\\\\mathbf{x} = (x_1, x_2, x_3) = (7, 0, 0)$ and the matrix $\\\\mathbf{A}$ in General Response 1:\\n$$\\n \\\\mathbf{A}=\\n\\\\begin{pmatrix}\\n6 & 9 & 5 \\\\\\\\\\\\\\\\\\n8 & 4 & 1 \\\\\\\\\\\\\\\\\\n5 & 7 & 5 \\\\\\\\\\\\\\\\\\n\\\\end{pmatrix},\\n$$\\nFrom the server's perspective, it only has access to $\\\\mathbf{y}$, which is expressed as:\\n$$\\n\\\\begin{array}{lll}\\n6x_1+9x_2+5x_3=42, \\\\\\\\\\\\\\\\\\n8x_1+4x_2+x_3=56,\\n\\\\end{array}\\n$$\\nThe server cannot distinguish whether $\\\\mathbf{x}$ is $(7, 0, 0)$ or $(18, -34, 48)$ or any other possible solution. While the server cannot obtain any individual element of $\\\\mathbf{x}$, it still obtains certain linear relationships involving the private elements, as pointed out by the reviewers. For this reason, we introduced Disrupting Variables Extension (DVE) in Sec. 3.3 to provide enhanced privacy.\\n### **2. Enhanced version: Disrupting Variables Extension (Sec. 3.3)**\\nDVE ensures that **the server cannot obtain any information about $\\\\mathbf{x}^i$ from $\\\\mathbf{y}^i$**, relying on **the hardness of the Learning With Errors (LWE) decision problem**. 
As detailed in Appendix D.3 (Lemma 3: The hardness of the Learning With Errors decision problem):\\n\\n*Given a finite field $\\\\mathbb{F}_p$ and a discrete probability distribution $\\\\mathcal{X}$ over $\\\\mathbb{F}_p$. Let $\\\\mathbf{s} \\\\in \\\\mathbb{F}_p^n$ be a secret vector, $\\\\mathbf{A} \\\\in \\\\mathbb{F}_p^{m \\\\times n}$ be a matrix that is chosen uniformly at random and $\\\\mathbf{e} \\\\in \\\\mathbb{F}_p^m$ be the error vector that is sampled from $\\\\mathcal{X}$. The Learning With Errors (LWE) (search) problem is to find $\\\\mathbf{s}$, given the pair $(\\\\mathbf{A}, \\\\mathbf{b})$, where $\\\\mathbf{b} = \\\\mathbf{A} \\\\mathbf{s} + \\\\mathbf{e}$. And the LWE decision problem is to distinguish between two uniformly randomly generated pairs. When the size of $p$ is polynomial in $n$, the LWE decision problem is at least as hard as the LWE search problem.*\\n\\nNoise $\\\\mathbf{e}$ is added to $\\\\mathbf{x}^i$ through Eq. 10, and $\\\\mathbf{y}=\\\\check{\\\\mathbf{A}}(\\\\mathbf{x}+ \\\\mathbf{e})=\\\\check{\\\\mathbf{A}}\\\\mathbf{x}+ \\\\mathbf{e'}$ ($\\\\check{\\\\mathbf{A}}$ is public). Therefore, given a uniformly random vector $\\\\mathbf{w}^i$, Lemma 3 ensures that $(\\\\check{\\\\mathbf{A}}, \\\\mathbf{y}^i)$ and $(\\\\check{\\\\mathbf{A}}, \\\\mathbf{w}^i)$ are indistinguishable, which guarantees $\\\\mathcal{S}$ does not obtain private information from honest users through frozen vectors.\\n\\n### **3. Overall security analysis**\\nFinally, we performed a security analysis of the secure aggregation protocol integrated with PVF under the Universal Composability (UC) framework, i.e., the proof of Theorem 1.\\n\\nWe sincerely invite you to review our responses and hope they resonate with your insights. Your dedication and expertise in evaluating our work have been invaluable in enhancing its quality.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"summary\": \"This paper introduces $\\\\lambda$-SecAgg, a secure aggregation protocol for federated learning (FL) designed to reduce computational and communication overhead through Partial Vector Freezing (PVF). This paper claims that by freezing and processing only a fraction of the private vector entries, the method significantly reduces the burden on the server and participating devices while ensuring all vector entries are eventually aggregated. To further enhance privacy, the paper incorporates Disrupting Variables Extension (DVE). The authors empirically demonstrate substantial performance gains in terms of speedup and communication reduction across various secure aggregation protocols.\\n\\nWhile the paper presents an interesting method to reduce the overhead in secure aggregation, the privacy analysis in Section 4.1 is fundamentally flawed. The authors underestimate the information leakage from $y^{i}$, which compromises the claimed privacy guarantees.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper considers a timely and important problem in secure aggregation protocol to reduce computational and communication overhead.\", \"weaknesses\": \"1. Most importantly, the privacy analysis in Section 4.1, which claims no privacy leakage from $y^{i}$, is flawed. Although the paper asserts that no specific element of the original vector $x$ can be deduced directly from $y^{i}$, this does not mean there is no privacy leakage. In fact, $y^{i}$ reveals significant information about $x$. For example, in the case where $\\\\lambda = 2$ and $x$ has two elements, the server can infer $x_1$ in terms of $x_2$ from $y^{i} = a_{11}x_1 + a_{12}x_2$. While $x_1$ cannot be fully determined without $x_2$, the conditional probability of guessing $x_1$ correctly is now $1/p$ instead of $1/{p^2}$. This reduction in entropy, $H(x)$, shows that $y^{i}$ contains valuable information, thus reducing privacy. 
The authors should revise the privacy analysis and clarify the impact of knowing $y^{i}$ on the security of the original vectors.\", \"questions\": \"Please see the comment in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Authors,\\n\\nThank you for providing clarifications. After carefully reviewing your responses regarding the privacy guarantees, as well as the updated manuscript, I remain unconvinced about the robustness of your protocol's privacy guarantees and the soundness of the presented security proof.\\n\\nFor example, while Theorem 1 still claims to address malicious adversaries, the proof does not adequately handle these scenarios. Similarly, replacing all plaintext inputs with dummy messages in H1 likely changes the output distribution compared to H0. As a result, the paper fails to demonstrate a satisfactory trade-off between performance and privacy in its current state, and significant changes are required to ensure the soundness of the proof.\"}",
"{\"comment\": \"We would like to appreciate you for your constructive feedback. We address your questions and concerns in the following.\\n\\n1. Privacy issues of two identical input. After introducing DVE, Eq. 8 ensures that when the noise of $\\\\mathcal{N}(0,\\\\sigma^2)$ is only added to $\\\\mathbf{x}$, the noise of $\\\\mathcal{N}(0,(1+\\\\frac{l}{\\\\lambda})\\\\sigma^2)$ is added to $\\\\mathbf{y}$. For example:\\n $$A_{1,1} (x_{(j-1) \\\\lambda + 1} + \\\\underline{k_1 +\\\\cdots+ k_{\\\\left \\\\lfloor \\\\frac{l}{\\\\lambda} \\\\right \\\\rfloor }})+\\\\cdots+A_{1, \\\\lambda} (x_{j\\\\lambda} + \\\\underline{k_{l- \\\\left \\\\lfloor \\\\frac{l}{\\\\lambda} \\\\right \\\\rfloor} + \\\\cdots + k_l}) =y_{(j-1) \\\\lambda + 1}.$$\\n The additional noise does not affect the recovery of $\\\\mathbf{x}$, as it is eliminated during the **thawing** process, i.e.,\\n $$A_{1,1} (\\\\sum x_{(j-1) \\\\lambda + 1} )+\\\\cdots+A_{1, \\\\lambda} (\\\\sum x_{j\\\\lambda} )=\\\\sum y_{(j-1) \\\\lambda + 1} - \\\\underline{ \\\\sum \\\\sum_{o\\\\in [1, \\\\lambda]} A_{1,o}\\\\sum_{r\\\\in \\\\left[(o-1)\\\\left \\\\lfloor \\\\frac{l}{\\\\lambda} \\\\right \\\\rfloor+1,o\\\\left \\\\lfloor \\\\frac{l}{\\\\lambda} \\\\right \\\\rfloor \\\\right]} k_{r} }.$$\\n Moreover, as pointed out in PracAgg[1] on page 12 of their paper: \\\"*...almost all of the computation cost comes from expanding the various PRG seeds to mask the data vector. Compared to this, the computational costs of key agreement, secret sharing and reconstruction, and encrypting and decrypting messages between clients, are essentially negligible.*\\\" Therefore, the efficiency gains brought by PVF are of considerable importance.\\n2. The indistinguishability between $H_0$ and $H_1$. Here, we no longer prove the randomness of $y^i$. 
We modify \\\"*we replace the frozen vectors $\\\\{\\\\mathbf{y}^{i}\\\\} _ {i\\\\in\\\\mathcal{U}}$ received by $\\\\mathcal{S}$ with uniformly random vectors.*\\\" in H1 of Theorem 1 to \\\"*replace $\\\\{\\\\mathbf{x}^{i}\\\\} _ {i\\\\in\\\\mathcal{U}}$ with random vectors that maintain the same correlation between $\\\\{\\\\mathbf{x}^{i}\\\\} _ {i\\\\in\\\\mathcal{U}}$ and $\\\\{\\\\mathbf{y}^{i}\\\\} _ {i\\\\in\\\\mathcal{U}}$*\\\". This adjustment ensures the continued validity of Theorem 1. Please see the modification of the proof of Theorem 1 in the document. We proceed to demonstrate that the original vector remains random even with knowledge of the correlation. Let $\\\\mathbf{x}$ denote a fragment of an original vector. We denote the **general solution** obtained by the adversary as $\\\\mathbf{x'}=\\\\rho\\\\mathbf{a}+\\\\mathbf{b}$, where $\\\\mathbf{a}$ and $\\\\mathbf{b}$ are solved given $\\\\mathbf{y}^i$, and $\\\\rho$ is an uncertain variable. The correct $\\\\rho$ corresponding to $\\\\mathbf{x}$ is $ \\\\rho _ * $, i.e., $\\\\mathbf{x} = \\\\rho _ * \\\\mathbf{a}+\\\\mathbf{b}$. It can be seen that $\\\\mathbf{x'}$ differs from $\\\\mathbf{x}$ in both magnitude and direction. That means upon acquiring the correlation, the server essentially gains that $\\\\mathbf{x}$ is still **randomly distributed with its initial point and terminal point on two parallel lines respectively** (as visually apparent from Fig. 9 in the new PDF), which does not compromise privacy.\\n3. The call to the ideal function. We expressed it correctly in the main text (line 309). Thank you for pointing out the typo in the appendix.\\n4. **Please note that the server's computation time in PracAgg is not 5 seconds, but rather 50 seconds**. The authors of PracAgg do not provide the source code, but we have strictly followed the implementation outlined in their proposal, as detailed in Appendix E.2 in our paper. 
We speculate that this discrepancy might be due to differences in machine configurations. The results in Ref[2][3] are similar to those in our paper, with the client computation time being around 10 seconds and the server overhead approximately 100 seconds.\\n\\n## Reference\\n[1] Bonawitz, Keith, et al. \\\"Practical secure aggregation for privacy-preserving machine learning.\\\" proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.\\n\\n[2] Hahn, Changhee, et al. \\\"VerSA: Verifiable Secure Aggregation for Cross-Device Federated Learning.\\\" IEEE Transactions on Dependable and Secure Computing 20.1 (2023): 36-52.\\n\\n[3] Liu, Ziyao, et al. \\\"Efficient dropout-resilient aggregation for privacy-preserving machine learning.\\\" IEEE Transactions on Information Forensics and Security 18 (2022): 1839-1854.\"}",
"{\"title\": \"Response part I\", \"comment\": \"Dear Reviewer zANE:\\n### **R1. The Relationship between PVF and SAP**\\nFrom your comments, we guess you may not have fully comprehended the calculation process of PVF. Therefore, we will first provide you with a detailed explanation of the relationship between PVF and the integrated Secure Aggregation Protocol (SAP).\\n\\nFor example, given $\\\\mathbf{x}^i = (x_1, x_2, x_3)$, the user computes $\\\\mathbf{y}^i = \\\\check{\\\\mathbf{A}}(\\\\mathbf{x}^i + \\\\mathbf{e}) = (y_1, y_2)$. The user only inputs $k^i =\\\\alpha(x_3 + e_3)$ into SAP while simultaneously transmitting $\\\\mathbf{y}^i$ as a piggyback message to the server during the protocol. Due to the hardness of LWE search and decision problem, $\\\\mathbf{y}^i$ does not reveal any private information about $\\\\mathbf{x}^i$. \\n\\nUpon completing the SAP, the user obtains $\\\\sum \\\\mathbf{y}^i$ and $\\\\sum k^i$. By leveraging Eq. (9), $\\\\sum \\\\mathbf{x}$ can be reconstructed. And we refer to the SAP integrated with PVF as $\\\\lambda$-SecAgg. Clearly, throughout this process, PVF does not modify the operations of SAP and remains decoupled from it. As long as the SAP can securely aggregate $\\\\sum k^i$, it can be seamlessly integrated with PVF.\\n\\n### **R2. How Cryptographic Primitives are Applied in PVF**\\nThe utilization of these cryptographic primitives is illustrated in Fig. 13. Here, based on the description of PVF in R1, we provide a detailed explanation of the active attacks that malicious participants can launch **within PVF** and how PVF leverages these cryptographic primitives to defend against them. \\n\\nFirst and foremost, it is evident that **in PVF**, the user **only transmits** $\\\\mathbf{y}^i$ to the server, and the server **only sends** $\\\\sum \\\\mathbf{y}^i$ back to the users after SAP ends. 
Notably, $k^i$ and $\\\\sum k^i$ are transmitted through the SAP, independent of PVF.\\n\\n1) **Forging Fake Users to Participate in PVF.** This type of attack, also known as a *Sybil Attack*, involves fake users reporting received information to the server. Such attacks primarily target scenarios where users share secret keys among themselves but keep the keys secret from the server, like PPDL [1]. Alternatively, an attacker may attempt to forge a large number of fake users (more than $\\\\frac{1}{3}|\\\\mathcal{U}|$) to reconstruct users\\u2019 private keys in the secret-sharing scheme. Since PVF does not involve information that is kept secret from the server but shared among all users, and consistent with the assumption in [2] that the number of malicious users does not exceed $\\\\frac{1}{3}|\\\\mathcal{U}|$ (line 147), PVF is resistant to this type of attack.\\n\\n2) **Attempting to Forge or Tamper with Honest Users' Messages.** Such attacks may occur in PVF in the following situations: malicious participants forging or tampering with an honest user's $\\\\mathbf{y}^i$. This can be avoided by the digital signature $\\\\sigma_{1}^{i}$ employed in PVF. Similarly, malicious participants may attempt to forge or tamper with $\\\\sum \\\\mathbf{y}^i$ sent by the server, which is prevented by the use of $\\\\sigma_{3}$. These protections are reflected in $H_2$ of the proof.\\n\\n3) **Sending Malformed Messages.** In PVF, such attacks include malicious users sending malformed ciphertexts of $\\\\mathbf{y}^i$ or the malicious server sending malformed ciphertexts of $\\\\sum \\\\mathbf{y}^i$. Such attacks are prevented by the IND-CPA and IND-CTXT security of the symmetric authenticated encryption used in PVF. If decryption fails, the protocol is immediately terminated. 
These protections are reflected in $H_1$ of the proof.\\n\\n4) **Intercepting and Stealing Private Information.** Malicious adversaries may intercept messages sent by honest users to extract private information. This is effectively avoided by the symmetric authenticated encryption employed in PVF. This protection is reflected in $H_1$ of the proof.\\n\\nThe use of symmetric authenticated encryption and digital signatures to ensure privacy under the active adversary model is a relatively mature application in the field of secure aggregation, and our design follows these existing works. In the proof, under the UC framework, $H_1$ and $H_2$ formally explain the impossibility of active adversaries forging or tampering with messages **in PVF**, as well as the fact that any attack by an active adversary would lead to the termination of the protocol, thus demonstrating the security of PVF under the active adversary model. \\n\\nIf you believe there are specific steps missing, please do not hesitate to offer your guidance.\"}",
"{\"comment\": \"Thanks for your response.\\n\\n - Just to understand a bit more the revised version, could you clearly explain what did you modify in $H_1$ and why did you make that change? \\n\\nI understand your comment. However, proving security requires to prove something much stronger than just informally showing that the adversary cannot immediately get the private values as you argue. \\n\\nFollowing standard methodology on security (e.g., such as sequentially composable security [R1] or universal composability [R2]), for a multiparty protocol (in this case, secure aggregation) to be secure, it is required that the view of the execution does not reveal more information than the ideal functionality (in this case a functionality that only reveals the sum $\\\\sum_{i \\\\in \\\\mathcal{U} \\\\setminus \\\\mathcal{C}} {\\\\mathbf{x}^i}$ of the private vectors of the honest parties). \\n\\nThis allows that protocols obtain *well studied* properties such as not having unexpected leakages when executed multiple times as in FL frameworks or in parallel as in higher level multiparty computations. This is why it is required that a simulator in the ideal world can emulate the protocol in the real world **without other information about private values other than its sum (i.e., the output of the ideal functionality)**.\\n\\nOn the other hand, the protocol you propose reveals by construction the majority of the linear relations between the elements of private vectors. This narrows the space of search of the adversary exponentially in the dimension of the private vectors, plus the information about $\\\\sum_{i \\\\in \\\\mathcal{U} \\\\setminus \\\\mathcal{C}} {\\\\mathbf{x}^i}$. This is overwhelmingly more information than aggregation protocols that are proven secure under standard notions (e.g., [R3, R4]). 
Given the amount of information that your protocol reveals, it seems to me that it is indeed insecure and therefore it is not fair to compare it with other secure protocols.\\n\\nWhile the above problems already hold for semi-honest adversaries that follow the protocol, you claim to prove security for a malicious adversary. However, your proof in Theorem 1 does not contemplate common steps required for such a proof (e.g., what happens in the simulation if the adversary sends corrupted messages). \\n\\n- Could you specify more clearly what Theorem 1 ensures and on which security framework the proof relies? \\n\\n[R1] Oded Goldreich. 2009. Foundations of Cryptography: Volume 2, Basic Applications. Cambridge University Press, Cambridge, England.\\n\\n[R2] Canetti, Ran. \\\"Universally composable security: A new paradigm for cryptographic protocols.\\\" Proceedings 42nd IEEE Symposium on Foundations of Computer Science. IEEE, 2001.\\n\\n[R3] Bell, James Henry, et al. \\\"Secure single-server aggregation with (poly) logarithmic overhead.\\\" Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. 2020.\\n\\n[R4] Bonawitz, Keith, et al. \\\"Practical secure aggregation for privacy-preserving machine learning.\\\" Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.\"}",
"{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}",
"{\"comment\": \"Dear authors,\\n\\nThank you for your detailed reply and efforts in modifying the manuscript. \\n\\nI am however still not convinced that the distribution of observations in your protocol is the same as the distribution required in Theorem 3.1 of (Regev et al.). \\n\\n1- For $\\\\mathbf{e}'$ to have a Gaussian distribution, you have to fix $A$ in advance. However, in (Regev et al.) observations of the form $(\\\\mathbf{a}, \\\\mathbf{a}^\\\\top \\\\mathbf{x} + e_i)$ (as described in my previous e-mail) require that $\\\\mathbf{a}$ follows a random distribution, which is not the case in your protocol as now $A$ is fixed. \\n\\n2- Your last modification of the protocol seems to make $\\\\mathbf{e}'$ independent, but now $\\\\mathbf{e}$ is correlated. This effect needs to be proven secure in the overall revealed information: in addition to $\\\\check{A}\\\\mathbf{x}$ that each user reveals, you will reveal the aggregation of all private values with a correlated noise term. This might not be innocuous. Please try to provide a step-by-step proof. \\n\\nGiven that we are at the end of the discussion period, I feel that it is hard for reviewers to assess the changes that you already added, and I still think that substantial changes on other problems that were mentioned in the discussion need to be made to the protocol, including a proper security proof. A friendly guide to do that is provided in [1]. \\n\\nI will keep my score because the current manuscript requires a large amount of work for its acceptance, but I encourage authors to keep investigating the security of this idea. \\n\\n[1] Lindell, Yehuda. \\\"How to simulate it\\u2013a tutorial on the simulation proof technique.\\\" Tutorials on the Foundations of Cryptography: Dedicated to Oded Goldreich (2017): 277-346.\"}",
"{\"title\": \"Response (Id GR2-16)\", \"comment\": \"Dear Reviewer r5Dg:\\n\\n### **R1: Public $\\\\mathbf{A}$ Does Not Compromise Security**\\nIn our scheme, $\\\\mathbf{A}$ is public, **does not** require manual construction, and is randomly generated. Familiarity with public key cryptosystems might aid in understanding this concept. You may refer to *Public Key Cryptosystem* of [1] on page 35, where $\\\\mathbf{a}_i$ is **also** used as the *Public Key*. Similarly, in their scheme, $\\\\mathbf{a}_i$ is also \\\"**fixed in advance**.\\\"\\n\\n### **R2: Privacy of Aggregation Results**\\nFirstly, we have demonstrated that $\\\\mathbf{\\\\check{A}x}$ does not reveal any private information. \\n\\nSecondly, even without adding any noise, as shown in [2], $\\\\sum \\\\mathbf{x}^i$ **does not** compromise the privacy of any individual $\\\\mathbf{x}^i$, which is a foundational principle and consensus in **secure multi-party computation**. Consequently, adding noise (which itself is an aggregated value) does not affect the privacy of the values participating in the aggregation.\\n\\nIn our discussions, we have thoroughly demonstrated the security of our scheme and addressed all the concerns you previously raised. Could you please **specify** why you believe \\\"the distribution of observations in your protocol is not the same as the distribution required in Theorem 3.1 of [1]\\\"? \\n\\nAdditionally, the current version of the PDF already includes an almost complete security proof, and we only need to incorporate the content of Response Id GR2-14 into the PDF. This discussion has taken place under the General Response, with other reviewers observing the entire process. **We do not believe that making minor adjustments at the end of the discussion period should serve as grounds for rejecting our work**. If we have adequately resolved your concerns, we earnestly hope that you will consider raising your score.\\n\\n### Reference\\n[1] Regev, Oded. 
\\\"On lattices, learning with errors, random linear codes, and cryptography.\\\" Journal of the ACM (JACM) 56.6 (2009): 1-40.\\n\\n[2] Bonawitz, Keith, et al. \\\"Practical secure aggregation for privacy-preserving machine learning.\\\" Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.\"}",
"{\"title\": \"General response 4: Looking Forward to Your Reply\", \"comment\": \"Dear reviewers, PCs and ACs:\\n\\nWe are immensely grateful to the reviewers for their valuable feedback, as well as to the PCs and ACs for their coordination efforts. The discussions over the past three weeks have yielded numerous insightful suggestions, which have been instrumental in improving the quality of our work.\\nFollowing several discussions with multiple reviewers, we have now responded to all the questions and concerns raised. With **fewer than 12 hours** remaining until the conclusion of the discussion period, we sincerely invite you to participate in the dialogue. If you have any further questions or new insights, your input would be greatly valued. **If we have already addressed your concerns to your satisfaction, we kindly request that you consider raising our score.**\\n\\n**In General Response 3**, we demonstrated that PVF effectively protects user privacy. Furthermore, through extensive discussions with Reviewer r5Dg (to whom we express our sincere respect and gratitude), we have thoroughly examined and provided detailed explanations regarding privacy protection (please refer to **Responses GR2-6 to GR2-14**). Since PVF significantly reduces the overhead of secure aggregation, we are confident that our work will contribute greatly to both community research and practical engineering applications.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"comment\": \"To be clear, this answer does not address my concerns about security of your protocol.\\n\\nYour proof still does not account for actively malicious adversaries. While the primitives you reference (e.g., symmetric authenticated encryption and digital signatures) are known to enhance security, your proof does not explicitly demonstrate how these are applied within your protocol to address malicious behavior. I recommend referring to theory on simulation-based proofs or established approaches in related work to illustrate how to formally account for such adversaries.\\n\\nSimilarly, your statement that \\\"the security of the remaining steps [...] is inherently ensured by the integrated SAP itself\\\" is not substantiated in your security proof. Typically, such a claim would require invoking the simulator or ideal functionality of the integrated SAP explicitly. Without this, the connection between the SAP and your protocol's security remains unclear.\"}",
"{\"comment\": \"Dear authors,\\n\\nIn your response you haven't addressed the main concerns of my last comment (i.e., https://openreview.net/forum?id=E1Tr7wTlIt&noteId=iog03MH6Wy). Therein:\\n- I explain that satisfying standard security (either universal or sequential composability referenced therein) requires that your protocol not reveal more information than just the aggregation of the private vectors of the honest parties.\\n- I express my concern that your protocol reveals overwhelmingly more information than just the sum (i.e. the space of possible solutions of the linear system shrinks exponentially in the number of dimensions of private vectors compared with existing secure protocols).\\n- Therefore I conclude that your protocol does not satisfy standard security.\"}",
"{\"comment\": \"Dear Reviewer zANE:\\n\\nThank you for your comment. We hope the following response can help clarify your concerns. \\n\\n### **1. Privacy in the Active Adversary Model**\\nIn the active adversary model, malicious participants may:\\n\\n* send malformed or incorrect messages to disrupt the calculations of honest parties.\\n* forge fake users to engage in the protocol.\\n* attempt to forge or tamper with the messages of other parties.\\n* attempt to send a fabricated special message.\\n\\nAll of these attacks can be avoided by the use of symmetric authenticated encryption and digital signatures (as in $H_1$ and $H_2$ of our proof). This security assurance is also evident in the security analysis of other aggregation protocols, such as the proof of Theorem IV.4 in [1] and the proof of Theorem A.2 in [2].\\n\\n### **2. Indistinguishability between $H_1$ and $H_0$**\\nIn fact, replacing all plaintext inputs with dummy messages in $H_1$ **does not** change the distribution compared to $H_0$. This is because the symmetric authenticated encryption we employ satisfies both IND-CPA (Indistinguishability under Chosen Plaintext Attack) and INT-CTXT (Integrity of Ciphertexts), as stated in **line 878** of our paper. Let's briefly introduce these two security requirements:\\n\\n* IND-CPA ensures that an encryption scheme maintains indistinguishability under chosen plaintext attacks[3]. It means that even if an attacker can select arbitrary plaintexts and observe their corresponding ciphertexts, they cannot deduce any information related to the plaintext from the ciphertext. 
In short, the attacker is unable to distinguish between ciphertexts with any significant advantage over random guessing, regardless of the plaintext encrypted.\\n\\n* INT-CTXT guarantees that an attacker, without knowledge of the plaintext or the key, cannot generate a valid ciphertext that, when decrypted, results in a legitimate plaintext[3].\\n\\nThus, after replacing the plaintext in $H_0$ with dummy messages, SIM is unable to distinguish the ciphertexts after encryption. This point is widely acknowledged in the secure aggregation literature, as evidenced by the proof of Theorem 6.3 in [2], specifically $Hyb_2$, as well as $Hyb_4$ and $Hyb_5$ of the proof of Theorem A.2 in [2].\\n\\nIf you have any other questions, please let us know.\\n\\n### Reference\\n[1] Liu, Ziyao, et al. \\\"Efficient dropout-resilient aggregation for privacy-preserving machine learning.\\\" IEEE Transactions on Information Forensics and Security 18 (2022): 1839-1854.\\n\\n[2] Bonawitz, Keith, et al. \\\"Practical secure aggregation for privacy-preserving machine learning.\\\" Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.\\n\\n[3] Bellare, Mihir, and Chanathip Namprempre. \\\"Authenticated encryption: Relations among notions and analysis of the generic composition paradigm.\\\" International Conference on the Theory and Application of Cryptology and Information Security. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000.\"}",
"{\"title\": \"General Response 2: Significant Progress made during Discussions\", \"comment\": \"Dear reviewers, PCs and ACs:\\n\\nWe sincerely thank the reviewers for their constructive feedback. After multiple discussions with several reviewers, we summarize the significant progress made regarding our submission as follows:\\n\\n1. **Privacy** (@Reviewer SAYu, @Reviewer zANE, @Reviewer r5Dg): \\n In Appendix D.3, we have refined the proof of Theorem 1, demonstrating that **PVF effectively preserves user privacy**. This point has been **acknowledged by Reviewer r5Dg during the discussion**. We provide the theoretical support indicating that PVF does not leak private elements or their relationships, grounded in the hardness of the Learning With Errors (LWE) decision problem. And we present a detailed security analysis of PVF relying on the Universal Composability (UC) framework. Please refer to page 18 of the latest PDF.\\n\\n2. **Clarity** (@Reviewer z5KJ, @Reviewer a2uE): \\n In General Response 1, we included a concise example to provide a clearer explanation of PVF, which has **aided Reviewer SAYu and Reviewer r5Dg in better understanding PVF**.\\n\\nIf our responses have addressed your concerns, we kindly ask you to consider **raising your scores**. Should any questions remain unresolved or if we have overlooked any of your points, please do not hesitate to raise them.\\n\\nThank you very much.\\n\\nBest regards,\\n\\nAuthors.\"}",
"{\"comment\": \"## Response to the Issues of Privacy\\nThe statement \\\"*In this case, values of $\\\\check{\\\\mathbf{A}}\\\\mathbf{x}$ will always be multiples of $x_1$*\\\" has caused me considerable confusion. **How can a vector be a multiple of a scalar?** Using the parameters from General Response 1 as an example, suppose the user vector is $\\\\mathbf{x}=(7,0,0)$ and $\\\\mathbf{y}=\\\\check{\\\\mathbf{A}}\\\\mathbf{x}=(42,56)$. Could you please provide further clarification using this example? Our approach differs from the compression-based method you mentioned. **Their focus is orthogonal to ours**, and we have already demonstrated in our paper that our compression is lossless (if without DVE).\\n## Response to Detailed Comments\\n1. The mask-related approaches indeed belong to SMPC. However, due to the broad implications of mask-related approaches, many works [1][2] classify mask-based solutions separately. \\\"*(i) improving the masking mechanism*\\\" refers to the introduction of novel masking mechanisms as proposed in [2][3]. \\\"*The security of FL*\\\" pertains to safeguarding the privacy of user inputs.\\n2. Please refer to **Input correctness** in [4] on page 3: \\\"*...providing strong guarantees against malicious inputs remains an open problem. Some works use zero-knowledge proofs to bound how much a client can bias the final result, but they are unable to formally prove the absence of all possible attacks.*\\\" Similarly, establishing strong constraints against malicious inputs remains an unresolved challenge, and **it falls beyond the scope of our work**. For the end-to-end comparison, please refer to Fig. 2. The local training steps required by ML are independent of the overhead of secure aggregation, and the overhead of the local training steps is **not the focus of this paper**.\\n3. $AK$ is addressed in Lemma 1 and Lemma 2 of Appendix D.2 (Eq. 21 and Eq. 22). $\\\\mathbf{A}$ undergoes certain pre-checks (see Sec. 4.1). 
Since $\\\\mathbf{A}$ is public and agreed upon by all clients, any malicious construction would require collusion among all users and can be detected by the honest users. Clearly, an overly simplistic $\\\\mathbf{A}$ cannot be used.\\n4. \\\"... even if some secagg entries are compromised\\\": We are considering extreme cases here. \\\"complicating the relationships among entries and further enhancing privacy\\\": We have not employed obfuscation techniques. Our objective is to ensure that $\\\\mathbf{y}_j$ is not solely related to the privacy vector $\\\\mathbf{x}_j$ of the current $j$-th group, but also to elements from other groups (as shown in Eq. 8). This is to prevent the leakage of certain elements within the current group from undermining the security of other elements. \\n5. Note that PVF does not attempt to alter SAP or eliminate the dependency of the dimension of a vector, it reduces the number of entries processed in SAPs while ensuring intact aggregation of all entries in the original vector. Furthermore, the experimental results of PVF are highly significant.\\n6. The proportion of SecAgg overhead in the computational cost of a local training step for a client is **unrelated** to the goal of reducing SAP computational overhead in this paper.\\n## Reference\\n[1] Liu, Ziyao, et al. \\\"Privacy-preserving aggregation in federated learning: A survey.\\\" IEEE Transactions on Big Data (2022).\\n\\n[2] Liu, Ziyao, et al. \\\"Efficient dropout-resilient aggregation for privacy-preserving machine learning.\\\" IEEE Transactions on Information Forensics and Security 18 (2022): 1839-1854.\\n\\n[3] Stevens, Timothy, et al. \\\"Efficient differentially private secure aggregation for federated learning via hardness of learning with errors.\\\" 31st USENIX Security Symposium (USENIX Security 22). 2022.\\n\\n[4] Ma, Yiping, et al. 
\\\"Flamingo: Multi-round single-server secure aggregation with applications to private federated learning.\\\" 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023.\"}",
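The numeric example disputed in the exchange above ($\mathbf{x}=(7,0,0)$, $\mathbf{y}=\check{\mathbf{A}}\mathbf{x}=(42,56)$) can be reproduced with a short sketch. The matrix entries below are hypothetical, chosen only so that the quoted numbers come out; they are not taken from the paper:

```python
# Hypothetical (lambda-1) x lambda = 2 x 3 public matrix; values are
# illustrative only, picked to reproduce y = (42, 56) for x = (7, 0, 0).
A_check = [[6, 1, 3],
           [8, 5, 2]]
x = [7, 0, 0]                  # private vector with a single nonzero entry

y = [sum(a * xi for a, xi in zip(row, x)) for row in A_check]
print(y)                       # [42, 56]

# With x2 = x3 = 0, every revealed entry collapses to A_check[i][0] * x1,
# i.e. each entry of y is a multiple of the scalar x1 -- the structural
# leakage the reviewer is pointing at.
assert y == [row[0] * x[0] for row in A_check]
```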
"{\"comment\": \"## Response to Questions\\n1. Please refer to the second paragraph of Appendix E.1 in our paper.\\n2. Please read Sec. 3.2, specifically **the Phase 2: Main.SecAgg(\\u00b7) section**.\"}",
"{\"comment\": \"We express our sincere gratitude for your recognition and support!\\n1. About the expression of DP. What we want to express is that relying **solely** on the minimal noise added by DP is insufficient to thwart attacks, and our solution does not rely solely on DP to ensure privacy.\\n2. Compression-based baselines. These baselines are the closest in nature to our proposal, so we list them separately.\\n3. Threat model. Our threat models (please refer to line 145) are\\n 1. Semi-honest Model[1], also referred to as the \\\"honest-but-curious\\\" model, where participants in a protocol are assumed to follow the protocol correctly but may try to extract additional information from the data they receive. In this model, adversaries do not deviate from the protocol's rules or engage in malicious behavior like collusion or input manipulation. However, they may attempt to infer sensitive data from the information they have access to, using computational resources to gain an advantage. This model is often used in cryptography and secure multiparty computation, where participants are trusted to some extent but must be safeguarded from exploiting any inadvertent information leakage.\\n 2. Active Adversary Model[1], also known as the \\\"malicious adversary\\\" model. In this scenario, adversaries are not only capable of following the protocol but can also actively deviate from it, forging messages, manipulating inputs, or colluding with other participants to subvert the protocol. This model assumes that adversaries might engage in arbitrary, harmful behavior with the intention of compromising the security or correctness of the system.\\n\\n4. General Response 1 provides a simple example, which we hope will assist you in understanding Fig.4.\\n5. Data Accuracy. We add a small amount of DP noise to the original vector before PVF. After PVF and SAP, we get the aggregation result with a little noise, which is consistent with the LDP-based SAP. 
Due to the existence of SAP, a small $\\\\sigma$ is sufficient. Consistent with [2], we set the standard deviation of the noise to 0.0409 in Fig. 6.\\n6. Experimental issues. $\\\\lambda = 100$ refers to the parameter $\\\\lambda$ in PVF. In the \\\"End-to-end comparison\\\" in Sec. 5.3 (line 405) and \\\"Disrupting Variables Extension\\\" in Sec. 5.4 (line 446), we introduce the datasets and models used. In other experiments, such as those in Tab. 1 and Fig. 5, the user's original vectors are randomly generated. This is because these experiments primarily focus on the overhead of the secure aggregation phase, and the attributes of models (apart from the total parameter count) and datasets do not influence the results, as consistently pointed out in previous works [3][4].\\n## Reference\\n[1] Bonawitz, Keith, et al. \\\"Practical secure aggregation for privacy-preserving machine learning.\\\" Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.\\n\\n[2] Stevens, Timothy, et al. \\\"Efficient differentially private secure aggregation for federated learning via hardness of learning with errors.\\\" 31st USENIX Security Symposium (USENIX Security 22). 2022.\\n\\n[3] Hahn, Changhee, et al. \\\"VerSA: Verifiable Secure Aggregation for Cross-Device Federated Learning.\\\" IEEE Transactions on Dependable and Secure Computing 20.1 (2023): 36-52.\\n\\n[4] Liu, Ziyao, et al. \\\"Efficient dropout-resilient aggregation for privacy-preserving machine learning.\\\" IEEE Transactions on Information Forensics and Security 18 (2022): 1839-1854.\"}",
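The claim above, that per-user noise with the quoted $\sigma = 0.0409$ stays small relative to the aggregated sum, can be illustrated with a minimal sketch. The user count, vector dimension, and data here are made up for illustration:

```python
import math
import random

random.seed(0)
num_users, dim, sigma = 50, 1000, 0.0409   # sigma as quoted in the discussion

# Made-up private vectors; each user perturbs its vector before aggregation.
users = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(num_users)]
noisy = [[v + random.gauss(0, sigma) for v in u] for u in users]

true_sum = [sum(u[j] for u in users) for j in range(dim)]
noisy_sum = [sum(u[j] for u in noisy) for j in range(dim)]

# The aggregated noise per coordinate has std sigma * sqrt(num_users) ~ 0.29,
# small next to a sum of 50 values, so the aggregate stays close to exact.
err = max(abs(a - b) for a, b in zip(true_sum, noisy_sum))
print(err)
```

This only illustrates the magnitude argument; it says nothing about the DP or LWE guarantees debated elsewhere in the thread.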
"{\"comment\": \"Thank you for your recognition and hope the response can address your concerns.\\n\\n1. Clarity and readability. General Response 1 provides a simple example, which we hope will assist you in better understanding PVF.\\n2. Impact of noise on accuracy. Note that Differential Privacy (DP) is an extension of PVF. The experiments on DP are not the primary focus of this paper. As demonstrated in the experiments in [1][2], the experiments shown in Fig. 6, conducted in two scenarios, are sufficient to demonstrate that the impact of DP noise on accuracy is negligible.\\n3. Scalability to Multiple Users. All experiments in our paper focus on exploring multi-user scenarios. Please refer to the setup of each experiment (typically indicated in the titles of figures and tables).\\n## Reference\\n[1] Stevens, Timothy, et al. \\\"Efficient differentially private secure aggregation for federated learning via hardness of learning with errors.\\\" 31st USENIX Security Symposium (USENIX Security 22). 2022.\\n\\n[2] Liu, Zizhen, et al. \\\"SASH: Efficient secure aggregation based on SHPRG for federated learning.\\\" Uncertainty in Artificial Intelligence. PMLR, 2022.\"}",
"{\"comment\": \"Dear authors,\\n\\nI apologize for the confusion of $q$. By $32$ I meant 32 bits. \\n\\nThanks for your answers. However your response on the use of the results of Regev's paper does not convince me for the following reason:\\n\\nThe main theorem (Thm 3.1) of Regev's paper is about the hardness of solving $LWE_{p,\\\\Psi}$ given a polynomial size sets of samples of the distribution $(A, Ax +e)$ where \\n - $A$ is chosen uniformly at random \\n - $e$ has a Gaussian distribution $\\\\Psi$\\n\\nNote that in the paper, the distribution of $e$ is independent of the matrix $A$. Therefore, in your paper, if $\\\\check{A}$ is also a random variable you cannot use $e' = \\\\check{A}e$ where $e$ is Gaussian because $e'$ is not Gaussian (i.e., only the conditional distribution of $e'$ given $\\\\check{A}$ follows a Gaussian distribution, but not $e'$ by itself). Therefore, it is my impression that you cannot apply this theorem. \\n\\nI stress that these aspects are very important for the security of the protocol and none of them have been explained in the manuscript.\"}",
"{\"title\": \"Refined Proof of Theorem 1\", \"comment\": \"Thank you for your detailed response and valuable comments.\\n\\nIn our previous revision of the PDF, we inadvertently omitted the content related to DVE in the proof of Theorem 1 (old $H_1$). This was an oversight, and you may refer to the latest version of the document to review the complete revised proof of Theorem 1 in Appendix D.3 (p17). Here, we provide a brief summary of the changes.\\n\\nThe security framework we rely on is the Universal Composability (UC) framework. \\n\\n* Firstly, in Lemma 3, we provide theoretical support indicating that PVF does not leak private elements or their relationships, based on the hardness of the Learning With Errors (LWE) decision problem. And we have refined the proof of Theorem 1 ($H_3$).\\n\\n* Secondly, we have included a security analysis under the active adversary model. In Appendix D.2, we have added an introduction to cryptographic primitives, specifically symmetric authenticated encryption and digital signatures. And we have refined Figure 13 and the proof of Theorem 1 ($H_1$-$H_2$). It is evident that, in the active adversary model, the adversary's attempts to forge messages from other honest participants or to maliciously construct messages will not result in the leakage of user privacy. And the user privacy in the SAP protocol is ensured by the integrated SAP itself.\\n\\nIf you have any further questions, please do not hesitate to raise them.\"}",
"{\"summary\": \"This paper introduces a novel method called \\u03bb-SecAgg, which integrates a module named Partial Vector Freezing (PVF) into Secure Aggregation Protocols (SAPs) for federated learning. The main goal of this method is to reduce the computational overhead by \\u201cfreezing\\u201d most of the entries in user update vectors, allowing only a fraction (1/\\u03bb) of the original vector to be processed through secure aggregation. The frozen entries can later be \\u201cthawed\\u201d to recover the full aggregated vector, ensuring that no information is lost in the final aggregation. Additionally, the paper proposes a Disrupting Variables Extension (DVE) that enhances privacy by adding noise to the frozen entries using Differential Privacy (DP). The authors perform extensive empirical evaluations across seven baselines, demonstrating that PVF can achieve up to 99.5\\u00d7 speedup and 32.3\\u00d7 communication reduction without compromising user privacy or security.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Innovation: The concept of freezing and unfreezing vector entries in the context of secure aggregation is very novel. This approach effectively reduces the computational burden on SAP, which has been a significant bottleneck in real-world federated learning applications, especially for large-scale models such as Large Language Models (LLMs).\", \"comprehensive_evaluation\": \"The authors evaluate their approach on seven different baselines covering various secure aggregation protocols (e.g., homomorphic encryption-based, SMPC-based, mask-based). The experimental results show substantial improvements in computation time and communication cost.\", \"privacy_and_security\": \"The paper proves the privacy guarantees of \\u03bb-SecAgg under semi-honest and active adversary models through security analyses. 
In addition, the authors introduce extensions such as DVE, which further strengthens the privacy guarantees.\", \"weaknesses\": \"Clarity and readability: Although this paper presents a novel approach, some sections are dense and difficult to understand, especially the mathematical derivations and security analyses. It is suggested that the authors could improve the readability of these sections by providing more intuitive explanations and breaking down the steps as much as possible. In addition, the readability of some diagrams and formulas (e.g., those in Sections 3 and 4) is too low, and it is suggested that the authors could improve them by simplifying them or providing more detailed explanations.\", \"impact_of_noise_on_accuracy\": \"Although the paper claims that the impact of DVE (adding DP noise) on accuracy is negligible, the experimental results on the loss of accuracy due to DP noise are not detailed enough. It is suggested that the authors can add relevant experiments for this part.\", \"scalability_to_multiple_users\": \"This paper focuses on performance improvements for single users and servers, but does not discuss scalability to multiple users. It is suggested that the authors validate the approach of this paper in the context of multiple simultaneous users, especially with respect to communication overheads and system latency.\", \"questions\": \"This paper presents a novel and practical approach to reduce the computational overhead of secure aggregation in federated learning by proposing \\u03bb-SecAgg with partial vector freezing (PVF). The strengths of the method lie in its innovative design, theoretical rigor and comprehensive evaluation, showing significant performance improvements. However, there are areas that could benefit from further clarification, particularly in terms of readability, real-world evaluation, and the impact of noise on accuracy. 
It is recommended that the authors consider the above comments to further refine and optimize the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear authors,\\n\\nThanks for your reply. Here are some comments about it: \\n\\nThe sentence \\\"similar to many hybrid schemes combining mask and DP (Bonawitz et al., 2017, ...\\\" is not correct as (Bonawitz et al., 2017) does not use DP in its protocol. \\n\\nI would like to stress that providing security should not be an \\\"enhancement\\\" of the protocol, but the first basic property. In fact, the goal of Sec 3.3 seems to be to add additional security (a security property that was not yet there as we discussed before) in the case additional entries were compromised (without actually providing a motivation for such a scenario). Therefore, the actual goal is currently very confusing. \\n\\nIn Eq. (10) you seem to add Gaussian noise to private values which would somehow simultaneously satisfy DP (for which parameters $\\\\epsilon$ and $\\\\delta$ are not specified) and LWE hardness (whose security parameters are also not specified). Note that the variance of the noise would affect both LWE security and DP privacy guarantees. The relation between the type of security and privacy and the variance required to satisfy them is not clear in the paper. From a differential privacy perspective, the variance of the noise required to satisfy acceptable guarantees increases as more linear relations are revealed. \\n\\nIn your case, as you reveal almost all linear relations to reconstruct (noisy) private values, the amount of noise that you would need would be similar to local DP, which is huge ($d$ times more than what you would require for secure aggregation that only reveals the sum, where $d$ is the dimension of your private vector). Moreover, the amount of noise required to achieve computational indistinguishability is even larger (in fact, much larger). This noise would largely impact your accuracy. In your experiments you seem to set the standard deviation of the noise to $0.040$ pointing to a paper but without explaining where this comes from. 
As said, from a DP point of view, you would already be required to add a prohibitive amount of noise if the dimension of private vectors is large. None of these precisions are included in your security analysis and no privacy (DP) analysis is done. \\n\\nFinally, following your discussion with Reviewer zANE, I would like to stress that I agree with his concerns about malicious security (which, by the way, in the universal composability framework would require the modelling of an environment that is not included in your proof). Note that the fact that I currently focus on other (important) problems of the paper does not mean that I have not contested this point, which is currently a major issue.\"}",
"{\"summary\": \"The paper addresses the challenges of secure aggregation in federated learning, particularly the high computation costs associated with Secure Aggregation Protocols (SAPs). The paper introduces a novel approach called Partial Vector Freezing (PVF), designed to reduce computation without increasing communication overhead. In addition, the paper proposes the disrupting variable extension to PVF to support enhanced privacy. The extensive experiments show the effectiveness of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The PVF significantly compresses the length of the vector involved in SAP.\\n2. The disrupting variables extension method improves privacy, without the computational overhead.\\n3. The authors conduct extensive experiments.\", \"weaknesses\": \"1. Lack of Novelty in the Proposed Solution:\\nWhile I appreciate the clarity and straightforwardness presented in your methodology, I am concerned about the apparent simplicity of the proposed solution. The approach, as described, seems to lack a sufficient level of innovation. Consider expanding on the theoretical background, comparing your method with others in detail, and emphasizing any novel insights or improvements that your solution offers.\\n2. Informality in Security Analysis:\\nThe security analysis section of your paper appears to be somewhat informal and lacks the rigor typically required for a comprehensive evaluation of a proposed system or method. Security is a critical aspect in many research domains, and a thorough, formal analysis is essential to establish trustworthiness and robustness. I recommend conducting a more structured and detailed security analysis, possibly incorporating formal security proofs, case studies, or simulations to demonstrate the effectiveness of your security measures.\", \"questions\": \"1. In practical applications, how should this value $\\\\lambda$ be determined?\\n\\n2. 
Are there any fundamental differences between the aggregation method of k^i and that of y^i?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear authors and reviewer r5Dg,\\n\\nThanks for your detailed discussion and sorry for interrupting the conversation. \\n\\nI also agree with the reviewer r5Dg, i.e., I am also not convinced that the distribution of observations in your protocol is the same as the distribution required in Theorem 3.1 of (Regev. et al.).\\n\\nI think the misunderstanding stems from the authors' statement \\\"$\\\\sum{\\\\mathbf{x}}^{i}$ does not compromise the privacy of any individual ${\\\\mathbf{x}}^{i}$, which is a foundational principle and consensus in secure multi-party computation\\\", which is not true.\\nThe sum of the local models definitely includes some information about the local model, and the amount of leaked information in a secure aggregation protocol is $O(1/N)$ where $N$ is the number of models to be aggregated [1].\\n\\n[1] Elkordy, A., et al. \\\"How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?\\\" PETS. 2023.\"}",
"{\"title\": \"Response (Id GR2-8)\", \"comment\": \"Dear Reviewer r5Dg:\\n\\nBased on the questions you raised, we speculate that you may not have thoroughly reviewed the latest version of our PDF. In the following response, we will include relevant excerpts from the PDF where necessary.\\n\\n### **R1. The theoretical foundation of Remark 3 in our paper**\\nRemark 3 in our paper summarizes the content of \\\"*THEOREM 3.1 (MAIN THEOREM)*\\\" on page 20 of [1]. In simple terms, Regev demonstrated that when $\\\\alpha q > 2\\\\sqrt{v}$, or equivalently $\\\\sigma > \\\\frac{2\\\\sqrt{v}}{\\\\sqrt{2\\\\pi}}$, where $\\\\sigma = \\\\frac{\\\\alpha q}{\\\\sqrt{2\\\\pi}}$ (see II.B in [2]), solving the LWE problem would be equivalent to solving the Shortest Vector Problem, a well-known NP-hard problem. In our experiment (Fig. 6), $\\\\sigma$ is set to 8783 $(>\\\\frac{2 \\\\times \\\\sqrt{1000}}{\\\\sqrt{2\\\\pi}})$, which satisfies the security requirements.\\n### **R2. The standard deviation of $\\\\mathcal{X}$**\\nPlease refer to the description in R1. The standard deviation of the added noise must satisfy $(>\\\\frac{2 \\\\times \\\\sqrt{\\\\lambda}}{\\\\sqrt{2\\\\pi}})$. In our experiments, $\\\\lambda$ ranges from [100, 1000], so the **maximum** required minimum standard deviation is $\\\\frac{2 \\\\times \\\\sqrt{1000}}{\\\\sqrt{2\\\\pi}} = 25$. We set it to 8783 merely to demonstrate that even when noise far exceeding the security requirements is added, the impact on the model's accuracy remains negligible.\\n\\n### **About the Question**\\n*\\\"you would be already required to add a prohibitive amount of noise if the dimension of private vectors is large.\\\"*\\n \\nWe still do not understand this comment based on the response you provided. We have emphasized that the security foundation of this paper is **based on LWE, not DP**. Although both DP and LWE involve Gaussian noise, their principles for achieving privacy protection are **entirely different**. 
The LWE problem and the conditions under which it is difficult have been **clearly explained** in our paper. We kindly ask the reviewer to consult the definitions of DP and LWE and compare them to understand the differences between the two. Furthermore, based on your intuitive and simple explanation, we do not understand why the noise intensity in our proposed scheme (or in LWE problem) would be related to the vector dimension. We would appreciate it if the reviewer could provide a **formal** explanation or an **example** to clarify this, or perhaps refer to **relevant literature and theoretical foundations** to support this claim.\\n\\nIf you have any questions, please do not hesitate to raise them.\\n\\n### Reference\\n[1] Regev, Oded. \\\"On lattices, learning with errors, random linear codes, and cryptography.\\\" Journal of the ACM (JACM) 56.6 (2009): 1-40.\\n\\n[2] Marcolla, Chiara, et al. \\\"Survey on fully homomorphic encryption, theory, and applications.\\\" Proceedings of the IEEE 110.10 (2022): 1572-1609.\"}",
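The noise-floor arithmetic quoted in the response above ($\sigma > \frac{2\sqrt{v}}{\sqrt{2\pi}}$, with $v = \lambda$ ranging over [100, 1000]) can be checked directly. This is only arithmetic on the formula as stated in the discussion, not an independent security analysis:

```python
import math

def min_lwe_sigma(v):
    """Minimum noise std from the quoted condition alpha*q > 2*sqrt(v),
    with sigma = alpha*q / sqrt(2*pi)."""
    return 2 * math.sqrt(v) / math.sqrt(2 * math.pi)

for lam in (100, 1000):
    print(lam, round(min_lwe_sigma(lam), 2))   # 100 -> 7.98, 1000 -> 25.23

# The experiments reportedly use sigma = 8783, far above both thresholds.
assert 8783 > min_lwe_sigma(1000)
```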
"{\"comment\": \"I have already read that $\\\\check{A}$ is public in the paper and in your previous comment. However, it does not mean that your use of Theorem 3.1 is correct.\", \"let_me_be_more_specific\": \"The LWE problem stated in Regev's paper defines observations as pairs $(\\\\mathbf{a}, \\\\mathbf{a}^\\\\top \\\\mathbf{x} + e)$ where $\\\\mathbf{x}$ is your secret vector of dimension $\\\\lambda$, $\\\\mathbf{a}$ is a random vector sampled uniformly at random from $\\\\mathbb{Z}_p^{\\\\lambda}$ and $e$ is a scalar sampled from the gaussian distribution $\\\\Psi$. For all observations $e$'s are i.i.d samples of a given $\\\\Psi$. \\n\\nIn your paper you have a matrix $\\\\check{A} \\\\in \\\\mathbb{Z}\\\\_p^{(\\\\lambda-1) \\\\times \\\\lambda }$. The pair $(\\\\check{A}, \\\\check{A}\\\\mathbf{x} + \\\\mathbf{e}')$ cannot be used as a single observation, because observations are vector-vector products and not matrix-vector products. This in itself is not a problem as you can define the set of $\\\\lambda-1$ observations of the form $(\\\\check{A}\\\\_{i,:}, \\\\check{A}\\\\_{i,:}\\\\mathbf{x} + \\\\mathbf{e}'\\\\_i )$, where for all $i \\\\in \\\\\\\\{1,\\\\dots, \\\\lambda-1\\\\\\\\}$, $\\\\check{A}\\\\_{i,:}$ is the $i$th row of $\\\\check{A}$ and $\\\\mathbf{e}'\\\\_i = \\\\check{A}_{i,:} \\\\mathbf{e}$ for a vector $\\\\mathbf{e}$ that follows a multivariate Gaussian distribution. \\n\\nLets examine the distribution of your observations. Each vector $\\\\check{A}\\\\_{i,:}$ is a random vector as $\\\\check{A}$ is a random matrix in the adequate domain. 
However, your distribution of pairs cannot be stated as the distributions of the paper for the following reasons: \\n- each sample $\\\\mathbf{e}'\\\\_i$ has a different Gaussian distribution dependent on $\\\\check{A}_{i,:}$, while in Regev's paper they should all be samples from the same distribution\\n- all samples $\\\\mathbf{e}'\\\\_i$ are correlated with each other, as they all depend on $\\\\mathbf{e}$. However, in Regev's paper all samples of $\\\\mathbf{e}'\\\\_i$ must be independent. \\n\\nTherefore, your protocol does not appear to meet the preconditions to apply Theorem 3.1 of Regev's paper.\"}",
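The covariance structure the reviewer describes can be written down concretely. A minimal sketch (mine, reusing the toy $\check{A}$ from General Response 1 purely for illustration): if $\mathbf{e} \sim \mathcal{N}(0, \sigma^2 I)$ and $\mathbf{e}'_i = \check{A}_{i,:}\mathbf{e}$, then $\mathrm{Cov}(\mathbf{e}') = \sigma^2 \check{A}\check{A}^\top$, whose off-diagonal entries are generally non-zero.

```python
# Sketch of the reviewer's correlation argument: with e ~ N(0, sigma^2 I) and
# e'_i = A_check[i, :] @ e, the vector e' has covariance sigma^2 * A_check @ A_check.T.
# Non-zero off-diagonal entries mean the e'_i are correlated, unlike the i.i.d.
# errors required by Regev's LWE formulation.
import numpy as np

sigma = 1.0
A_check = np.array([[6, 9, 5],
                    [8, 4, 1]])  # toy (lambda-1) x lambda matrix from the PVF example

cov = sigma**2 * (A_check @ A_check.T)
# Cov(e'_1, e'_2) = 6*8 + 9*4 + 5*1 = 89, so the two error samples are not independent.
assert cov[0, 1] == 89
```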
"{\"comment\": \"Dear Reviewer zANE:\\n\\n### **1. Privacy in the Active Adversary Model**\\nAs in our response to this comment (https://openreview.net/forum?id=E1Tr7wTlIt&noteId=iog03MH6Wy), symmetric authenticated encryption and digital signatures are sufficient to ensure that, in the event of malicious participants attempting the malicious behaviors mentioned in our previous response **during PVF**, the protocol will be terminated and will not lead to privacy leakage. This security assurance is also evident in the security analysis of other SAPs, such as $H_2,H_4,H_5,H_6,H_8$ of the proof of Theorem IV.4 in [1]. And this point was not contested by Reviewer r5Dg. \\n\\nWhile additional steps of SAP are indeed necessary to achieve security in the active adversary model (such as *ConsistencyCheck*), **note that PVF does not alter SAP or add additional communication rounds, and it is decoupled from the specific SAP**. Consequently, the security of the remaining steps except PVF is inherently ensured by the integrated SAP itself.\\n\\n### **2. Modification of $H_1$**\\nThank you for highlighting the lack of clarity in our writing. In PVF, **the only plaintext involved is $\\\\mathbf{y}^i$** (the simulation of other plaintexts in SAP is handled by the integrated SAP). Accordingly, we have revised $H_1$ as follows (we have updated it in the latest version):\\n\\n*$H_1$: This hybrid is distributed similarly to the previous one, except for the following modifications. $SIM$ obtains $\\\\sum_{i \\\\in \\\\mathcal{U}' \\\\backslash \\\\mathcal{C}}\\\\mathbf{x}^{i}$ by calling ${Ideal} _ {{\\\\{ \\\\mathbf{x}^{i}\\\\}} _ {i \\\\in \\\\mathcal{U}\\\\backslash \\\\mathcal{C}}}( {\\\\mathcal{U}' \\\\backslash \\\\mathcal{C}} )$. $SIM$ aborts if there is an illegal request. 
We replace the ciphertexts of $\\\\{\\\\mathbf{y}^{i}\\\\} _ {i\\\\in\\\\mathcal{U}}$ with the ciphertexts of uniformly random vectors $\\\\{\\\\mathbf{w}^{i}\\\\} _ {i\\\\in\\\\mathcal{U}}$ that satisfy $\\\\sum_{i \\\\in \\\\mathcal{U}' \\\\backslash \\\\mathcal{C}}\\\\mathbf{w}^{i} = \\\\sum_{i \\\\in \\\\mathcal{U}' \\\\backslash \\\\mathcal{C}}\\\\mathbf{y}^{i}$. $\\\\sum_{i \\\\in \\\\mathcal{U}' \\\\backslash \\\\mathcal{C}}\\\\mathbf{y}^{i}$ can be computed from Eq. 9 based on $\\\\sum_{i \\\\in \\\\mathcal{U}' \\\\backslash \\\\mathcal{C}}\\\\mathbf{x}^{i}$. The IND-CPA and INT-CTXT security of symmetric authenticated encryption guarantees the distribution of this hybrid is indistinguishable from the previous one.*\\n\\nThe output of SIM is identical to that of REAL, and no additional private information about the honest users is disclosed.\\n\\nIf you have any other questions, please let us know.\\n\\n### Reference\\n[1] Liu, Ziyao, et al. \\\"Efficient dropout-resilient aggregation for privacy-preserving machine learning.\\\" IEEE Transactions on Information Forensics and Security 18 (2022): 1839-1854.\"}",
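The replacement step described in the revised $H_1$ (random vectors $\mathbf{w}^i$ constrained to the same sum as the $\mathbf{y}^i$) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the variable names and the toy modulus are assumptions.

```python
# Sketch of the H_1 replacement: draw uniformly random vectors w^i, then adjust
# the last one so that sum_i w^i = sum_i y^i (mod p), keeping the simulated
# transcript consistent with the ideal aggregate output.
import numpy as np

p = 2**31 - 1                      # toy modulus, chosen only for illustration
rng = np.random.default_rng(1)

n_users, dim = 4, 6
y = rng.integers(0, p, size=(n_users, dim))   # stand-in for the honest users' y^i

w = rng.integers(0, p, size=(n_users, dim))
w[-1] = (w[-1] + y.sum(axis=0) - w.sum(axis=0)) % p  # enforce the sum constraint

assert ((w.sum(axis=0) - y.sum(axis=0)) % p == 0).all()
```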
"{\"comment\": \"Dear Reviewer zANE:\\n\\nIn Lemma 3, we offer theoretical support demonstrating that PVF does not leak private elements or their relationships, based on the hardness of the Learning With Errors (LWE) decision problem. And we have refined the proof of Theorem 1 ($H_3$). We hope this addresses your concerns regarding the privacy. A concise summary of the progress made in the rebuttal can be found in General Response 2. If you have any further questions, please do not hesitate to contact us.\\n\\nWe look forward to receiving your updates.\"}",
"{\"comment\": \"1. Your security proof does not account for actively malicious behavior. I am not the first reviewer to point this out (e.g., see https://openreview.net/forum?id=E1Tr7wTlIt&noteId=iog03MH6Wy). See, for instance, Theorem A.2 in your reference [2] for additional steps required to achieve malicious security in secure aggregation protocols.\\n2. In my understanding of your definition of H1, you \\u201creplace the plaintext with dummy messages\\u201d. This will lead to a different output of the protocol, because it does not depend on the real inputs anymore. In the hybrids you reference from [2], the replacement is done in such a way that this does not lead to a different output (e.g., only encrypted messages between parties controlled by the simulator).\"}",
"{\"comment\": \"1. Firstly, in Theorem 1, we explicitly state that $\\\\mathbf{\\\\lambda > 2}$. And **you did not compute $\\\\mathbf{y}$ using the correct method**, please refer to General Response 1 for clarification.\\n\\n2. Secondly, the conditional probability of guessing an element (like $x_1$) is inherently $\\\\frac{1}{p}$. **Where does $\\\\frac{1}{p^2}$ come from?**\"}",
"{\"summary\": \"The paper studies how to reduce the computational cost in privacy-preserving federated learning (FL) with secure aggregation (SecAgg). SecAgg is a primitive that improves the privacy-utility trade-off in FL as it hides individual model updates sent to the server. However, most efficient applications require masking each parameter of the model with random noise, incurring a large computational cost if models are big.\\n\\nThe current work proposes a technique that reduces the computational cost by performing SecAgg on only a subset of model parameters, while still recovering the (claimed to be) private aggregation of the entire model.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Reducing the communication cost in privacy-preserving ML is an interesting topic.\", \"The presentation of the protocol is fairly clear.\"], \"weaknesses\": \"# Main Weaknesses\\n\\nThe major weakness of the protocol is the **lack of any standard notion of security**. The protocol is based on the fact that revealing the undetermined system of linear equations $\\\\breve{A}x = y$ where $\\\\breve{A}$ and $y$ are public does not compromise the privacy of $x$. From a security point of view, letting the adversary gain the knowledge of $\\\\breve{A}x$ is **completely unsafe**. A clear example is already given in the detailed comments subsection below for certain choices of $\\\\breve{A}$. \\n\\nEven if the paper proposes some defenses to avoid the most obvious threats in the choice of $\\\\breve{A}$ (i.e. if the system of equations already directly exposes some coordinates of $x$), these defenses are only a minor improvement in the overall security. The protocol is in fact insecure for any $\\\\breve{A}$. For example, consider that a party joins the aggregation protocol with a vector $x = (x_1, 0, \\\\dots, 0)$ (i.e., a vector where $x_1$ is the only non-zero value). 
In this case, values of $\\\\breve{A}x$ will always be multiples of $x_1$. Therefore, the claim of Theorem 1 does not hold: multiples of $x_1$ are *distinguishable* from uniformly random numbers, contrary to what is claimed in hybrid 1 ($H_1$) in the proof of Theorem 1 (Appendix D.1). This renders the proof of Theorem 1 incorrect. \\n\\nThe computational improvements of this protocol come from the insecure modification described above. This makes the protocol inapplicable. Moreover, the attempts to further \\\"complicate\\\" the linear equations by the presented enhancements also follow an unsafe methodology lacking proper proofs (see my detailed comments below). \\n\\nIn addition to the above, the work ignores important lines of work in compression under privacy constraints (e.g. see [R1-R5] below) and differentially private-based aggregation (e.g., by the use of correlated noise [R6-R8]), directly related to the current contribution. \\n\\n\\n# Detailed Comments \\n\\n- Page 2, Section 2: \\n - \\\"Mask-based\\\" approaches are an instantiation of \\\"SMPC-based\\\" approaches. \\n - \\\"(i) improving the masking mechanism\\\": it is not clear what this means \\n - \\\"Note that the security of FL remains an open issue\\\": this is too broad and it is not clear what \\\"security\\\" means in this context\\n- Page 3: \\n - Section 2: \\\"However, their ability to prevent poisoning attacks is limited (Ma et al., 2023)\\\": not sure how the reference is relevant here. Does (Ma et al., 2023) provides evidence about this statement? \\n - Section 3, \\\"ultimately imposing significant computational burdens\\\": \\\"burdens\\\" $\\\\rightarrow$ \\\"burden\\\"; is this computational burden significant with respect to the computational cost of local training steps required by ML? \\n- Page 4: \\n - Def 1: \\\"... where $AK$ denotes the additional knowledge ..\\\": so far no mention of \\\"additional knowledge has been made\\\", so it is not clear to what this refers. 
Also, it should be explicitly clarified that $rank(A, Ax)$ means the rank of the horizontal concatenation of $A$ and $Ax$. \\n - \\\"... rendering it impossible to determine that specific confidential vector.\\\" this is an overly strong statement (at least if no additional context is given). Consider for example that $A$ equals the identity matrix. Indeed $\\\\breve{A}x$ has infinite solutions (all possible values of the removed coordinate of $x$). However, almost all coordinates of $x$ will be revealed if $\\\\breve{A}x$ is revealed. \\n- Page 5, Sec 3.3: \\n - \\\"... even if some secagg entries are compromised\\\": There is no motivation of the extra defense, explaining how these entries would be compromised. \\n - \\\"complicating the relationships among entries and further enhancing privacy\\\": This lacks a proof. Providing privacy by obscurity (i.e, providing an obfuscation technique without proving that indeed it reduces to hard problem for the adversary) is a bad practice in the field of security.\\n\\n\\n# References \\n\\n[R1] Bassily, Raef, and Adam Smith. \\u201cLocal, Private, Efficient Protocols for Succinct Histograms.\\u201d In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, 127\\u201335. STOC \\u201915. New York, NY, USA: Association for Computing Machinery, 2015. https://doi.org/10.1145/2746539.2746632.\\n\\n[R2] Feldman, Vitaly, and Kunal Talwar. \\u201cLossless Compression of Efficient Private Local Randomizers.\\u201d In Proceedings of the 38th International Conference on Machine Learning, 3208\\u201319. PMLR, 2021. https://proceedings.mlr.press/v139/feldman21a.html.\\n\\n[R3] Liu, Yanxiao, Wei-Ning Chen, Ayfer \\u00d6zg\\u00fcr, and Cheuk Ting Li. \\u201cUniversal Exact Compression of Differentially Private Mechanisms.\\u201d arXiv, May 28, 2024. https://doi.org/10.48550/arXiv.2405.20782.\\n\\n[R4] Shah, Abhin, Wei-Ning Chen, Johannes Ball\\u00e9, Peter Kairouz, and Lucas Theis. 
\\u201cOptimal Compression of Locally Differentially Private Mechanisms.\\u201d In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, 7680\\u20137723. PMLR, 2022. https://proceedings.mlr.press/v151/shah22b.html.\\n\\n[R5] Triastcyn, Aleksei, Matthias Reisser, and Christos Louizos. \\u201cDP-REC: Private & Communication-Efficient Federated Learning.\\u201d arXiv, December 7, 2021. https://doi.org/10.48550/arXiv.2111.05454.\\n\\n[R6] Imtiaz, Hafiz, Jafar Mohammadi, and Anand D. Sarwate. \\u201cDistributed Differentially Private Computation of Functions with Correlated Noise.\\u201d arXiv, February 22, 2021. https://doi.org/10.48550/arXiv.1904.10059.\\n\\n[R7] Kairouz, Peter, Brendan Mcmahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, and Zheng Xu. \\u201cPractical and Private (Deep) Learning Without Sampling or Shuffling.\\u201d In Proceedings of the 38th International Conference on Machine Learning, 5213\\u201325. PMLR, 2021. https://proceedings.mlr.press/v139/kairouz21b.html.\\n\\n[R8] Sabater, C\\u00e9sar, Aur\\u00e9lien Bellet, and Jan Ramon. \\u201cAn Accurate, Scalable and Verifiable Protocol for Federated Differentially Private Averaging.\\u201d Machine Learning 111, no. 11 (November 1, 2022): 4249\\u201393. https://doi.org/10.1007/s10994-022-06267-9.\", \"questions\": \"Could you please address the points raised in \\\"Main Weaknesses\\\" above?\", \"in_addition_to_these_questions\": [\"Could you illustrate in more detail which masking operations of the compared SecAgg protocols you avoid by the use of your proposal? It seems that neither performing a matrix-vector multiplication nor masking operations eliminates the computation's dependency on the dimension of the vector $x$.\", \"If we compare the computational cost of the SecAgg protocol and the computational cost of a local training step for a client, what proportion of the computation does the SecAgg overhead represent? 
How does this change for different models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Extra question:\\n\\n- What is the exact requirement of \\\"the size of $q$ is polynomial in $v$\\\"? $v$ here is a variable that changes with the learning task. However, $q$ seems to be fixed to 32 in your experiments. I get the impression that the concrete security of your protocol may depend on $q$.\"}",
"{\"comment\": \"Dear authors,\", \"i_have_follow_up_comments_with_questions\": \"1- Can you please specify which result within the paper of Regev [2] you are using to determine the variance? My comment for this paragraph is that you are still not explaining what security you have. I.e., what is the hardness of solving the Shortest Vector Problem? Every problem is also associated with a security parameter (e.g., \\\"exponential in $\\\\lambda$ where $\\\\lambda=...$). I mean, one can understand the hardness of a problem when used with standard parameters (e.g., large prime groups or fields). However, here, these precisions are not given. \\n\\n2- There are no details about the parts of (Stevens et al.) [3] that are equivalent or similar to your protocol in such a way that allows you to use similar parameters. Therefore, it is still not clear why the final variance you report would be correct for your protocol. You need to develop why LWE meets the conditions of Regev [2] within your protocol for a given variance and why this variance is correct. In this way it is possible to assess if \\\"$\\\\sigma$ is set to 8783, which can meet security requirements in Remark 3 and is equivalent to adding noise with a standard deviation of 0.0409\\\" is correct. It is not possible to provide an accurate assessment if key elements for the security of the protocol are not provided in the paper.\", \"answer_to_your_question\": \"The noise that would be required for DP for the amount of information that you reveal is very large. DP guarantees are weaker than the kind of indistinguishability provided by LWE security. Therefore, the noise should be larger. This means that if DP noise is already prohibitive, then the noise of your protocol must be even larger.\"}",
"{\"title\": \"Security Problems\", \"comment\": \"Let me answer your question regarding the major concerns (i.e., the insecurity of the protocol).\\n\\nYou ask \\\"**How can a vector be a multiple of a scalar?**\\\": the answer to your question is right there in the example you provide. It is clear that all the elements of vector $\\\\mathbf{y}$ are multiples of $7$ (i.e., $42= 7\\\\times 6$, $56=7\\\\times 8$). In fact, this will happen for any $\\\\check{A}$ you choose. Therefore, the distribution of $\\\\mathbf{y}$ will assign probability equal to $0$ to all numbers that are not multiples of $7$. This distribution is then **clearly** distinguishable from uniformly random numbers, making (as said in my review) your claim for hybrid 1 (and therefore Theorem 1) incorrect. \\n\\nThis is one way to easily illustrate the security problems, showing the strong dependence between the private data and $\\\\mathbf{y}$. Other broader arguments that align with mine are the ones provided by Reviewers zANE and SAYu.\"}",
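The leakage being debated here can be made executable. The following sketch (illustrative only; the toy modulus and names are assumptions, not from either party's code) shows that for a sparse secret $x = (x_1, 0, \dots, 0)$, every frozen entry is a public multiple of $x_1$, so a single entry recovers $x_1$ exactly.

```python
# Sketch of the attack the reviewer describes: with x = (x1, 0, ..., 0),
# each frozen entry y_i = A_check[i, 0] * x1 (mod p), so knowing the public
# matrix A_check lets the server solve for x1 from any single entry.
import numpy as np

p = 97                                    # toy prime modulus (assumption)
rng = np.random.default_rng(0)
lam = 5
A_check = rng.integers(1, p, size=(lam - 1, lam))  # public matrix, entries in [1, p)

x = np.zeros(lam, dtype=int)
x[0] = 7                                  # the only private non-zero coordinate

y_int = A_check @ x                       # before reduction: all multiples of 7
assert all(int(v) % 7 == 0 for v in y_int)

y = y_int % p                             # what the server observes
inv = pow(int(A_check[0, 0]), -1, p)      # modular inverse of the public coefficient
assert (y[0] * inv) % p == 7              # x1 is recovered exactly
```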
"{\"title\": \"General Response 1: A simple example of PVF\", \"comment\": \"Here we provide a simple example of PVF to help the reviewers gain a clearer understanding:\\nAssume the model vectors for two users are $\\\\mathbf{x^1}=(1,2,3,4,5,6,7,8,9)$ and $\\\\mathbf{x^2}=(9,8,7,6,5,4,3,2,1)$. We set $\\\\lambda=3$, meaning we divide $\\\\mathbf{x}$ into $\\\\frac{9}{\\\\lambda}=3$ groups. Following PVF, we generate the public parameters:\\n$$\\n \\\\mathbf{A}=\\n\\\\begin{pmatrix}\\n6 & 9 & 5 \\\\\\\\\\\\\\\\\\n8 & 4 & 1 \\\\\\\\\\\\\\\\\\n5 & 7 & 5 \\\\\\\\\\\\\\\\\\n\\\\end{pmatrix},\\n$$\\n$$\\n \\\\mathbf{\\\\check{A}}=\\n\\\\begin{pmatrix}\\n6 & 9 & 5 \\\\\\\\\\\\\\\\\\n8 & 4 & 1 \\\\\\\\\\\\\\\\\\n\\\\end{pmatrix},\\n$$ and \\n$$\\\\mathbf{\\\\alpha}= (5,7,5).$$\\nIn the secure aggregation phase, user 1 obtains the freezing vectors:\\n$$\\n\\\\begin{array}{lll}\\n\\\\mathbf{y}_1^1=\\\\mathbf{\\\\check{A}} \\\\mathbf{x}_1^1=(39,19), \\\\\\\\\\\\\\\\\\n\\\\mathbf{y}_2^1=\\\\mathbf{\\\\check{A}} \\\\mathbf{x}_2^1=(99,58), \\\\\\\\\\\\\\\\\\n\\\\mathbf{y}_3^1=\\\\mathbf{\\\\check{A}} \\\\mathbf{x}_3^1=(159,97),\\n\\\\end{array}\\n$$\", \"and_the_key_vector\": \"$$\\\\mathbf{k}^1 =(k_1^1,k_2^1,k_3^1)=(\\\\mathbf{\\\\alpha}\\\\mathbf{x}_1^1,\\\\mathbf{\\\\alpha}\\\\mathbf{x}_2^1,\\\\mathbf{\\\\alpha}\\\\mathbf{x}_3^1)=(34,85,136).$$\", \"user_2_obtains\": \"$$\\n\\\\begin{array}{lll}\\n\\\\mathbf{y}_1^2=\\\\mathbf{\\\\check{A}} \\\\mathbf{x}_1^2=(161,111),\\\\\\\\\\\\\\\\\\n\\\\mathbf{y}_2^2=\\\\mathbf{\\\\check{A}} \\\\mathbf{x}_2^2=(101,72),\\\\\\\\\\\\\\\\\\n\\\\mathbf{y}_3^2=\\\\mathbf{\\\\check{A}} \\\\mathbf{x}_3^2=(41,33),\\n\\\\end{array}\\n$$\", \"and_the_key_vector_is\": \"$$\\\\mathbf{k}^2 =(k_1^2,k_2^2,k_3^2)=(\\\\mathbf{\\\\alpha}\\\\mathbf{x}_1^2,\\\\mathbf{\\\\alpha}\\\\mathbf{x}_2^2,\\\\mathbf{\\\\alpha}\\\\mathbf{x}_3^2)=(136,85,34).$$\", \"the_secure_aggregation_of_the_key_vectors_gives\": 
\"$$\\\\mathbf{k}^1+\\\\mathbf{k}^2=(170,170,170).$$\\n\\nFor any group (taking the first group as an example), the server cannot deduce $\\\\mathbf{x}^i$ from $\\\\mathbf{y}^i$, but when given $\\\\\\\\sum\\\\mathbf{k}$, the server can solve the following system of equations:\\n$$\\n\\\\begin{array}{lll}\\nA _ {1,1} \\\\sum \\\\mathbf{x}_1+ A _ {1,2} \\\\sum \\\\mathbf{x}_2+A _ {1,3} \\\\sum \\\\mathbf{x}_3=39+161=200 \\\\\\\\\\\\\\\\\\nA _ {2,1} \\\\sum \\\\mathbf{x}_1+ A _ {2,2}\\\\sum \\\\mathbf{x}_2+A _ {2,3}\\\\sum \\\\mathbf{x}_3=19+111=130 \\\\\\\\\\\\\\\\\\nA _ {3,1} \\\\sum \\\\mathbf{x}_1+ A _ {3,2}\\\\sum \\\\mathbf{x}_2+A _ {3,3}\\\\sum \\\\mathbf{x}_3=34+136=170\\n\\\\end{array}\\n$$\\nto obtain $\\\\sum \\\\mathbf{x}=(10,10,10)$. Similarly, the aggregated values for other groups can be obtained.\\n\\nIt is evident that Sec. 3.2 and Fig. 4 provide a detailed, general derivation of the above example. Although it may appear complex, each equation is essential for explaining the PVF computation process. We kindly ask the reviewers to read it with patience. If you have any questions, please do not hesitate to raise them.\"}",
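The worked example in General Response 1 can be reproduced in a few lines. This is an illustrative sketch over the plain integers (the protocol itself works in $\mathbb{Z}_p$); the variable names are mine, not the authors'.

```python
# Reproduction of the PVF toy example: the server combines the public frozen
# vectors y with the securely aggregated key vector k, then inverts A to
# recover only the *sum* of the users' inputs.
import numpy as np

A = np.array([[6, 9, 5],
              [8, 4, 1],
              [5, 7, 5]])
A_check = A[:2]                 # public (lambda-1) x lambda matrix
alpha = A[2]                    # row used to build the key vector

x1 = np.arange(1, 10).reshape(3, 3)       # user 1's vector, split into 3 groups
x2 = np.arange(9, 0, -1).reshape(3, 3)    # user 2's vector

y1, y2 = x1 @ A_check.T, x2 @ A_check.T   # frozen vectors, sent in the clear
k1, k2 = x1 @ alpha, x2 @ alpha           # key vectors, aggregated securely
k_sum = k1 + k2
assert (k_sum == 170).all()               # (170, 170, 170) as in the example

# Per group g, solve A * s_g = (y_sum_g, k_sum_g) for the aggregated group s_g.
rhs = np.column_stack([y1 + y2, k_sum])   # rows: (200, 130, 170), per group
x_sum = np.linalg.solve(A, rhs.T).T
assert np.allclose(x_sum, 10)             # sum of the two inputs is (10, ..., 10)
```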
"{\"title\": \"Response (Id GR2-18)\", \"comment\": \"Dear Reviewer SAYu:\\n\\nThanks for your reply.\\n\\nThe primary concern does not stem from the statement \\\"$\\\\sum \\\\mathbf{x}^i$ does not compromise the privacy of any individual $\\\\mathbf{x}^i$, which is a foundational principle and consensus in secure multi-party computation.\\\" This is a privacy guarantee for secure aggregation, **not** the theoretical foundation of our scheme. We have consistently focused on demonstrating that PVF **does not compromise the security** of secure aggregation. \\n\\nInvestigating vulnerabilities and defenses within secure aggregation protocols is beyond the scope of our work. As we pointed out in our document (line 105):\\n\\n*Note that the security of FL remains an open issue. SAPs, though unable to fully guarantee FL security at present, remain a promising direction worth exploring. The main objective of our work is to **reduce the masking-related overhead of secure aggregation, thereby making it more applicable in practice**.*\"}",
"{\"summary\": \"The authors present a new system to reduce the computational overhead of secure aggregation using a new approach called Partial Vector Freezing. This approach reduces the number of entries processed in the secure aggregation protocol, by projecting chunks of the client input vector onto a different space, and only aggregating $1/\\\\lambda$ of the entries of each chunk securely and sending the rest of the entries in the clear. The server aggregates the entries from all clients and recovers the original input vectors by projecting the inputs back to the original space. The paper further bolsters privacy through the Disrupting Variables Extension, which applies noise calibrated for Local Differential Privacy to frozen vectors. Experimental results demonstrate substantial computation improvements compared to state-of-the-art secure aggregation protocols.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Privacy-preserving federated learning is a crucial topic\", \"Extensive evaluation that covers a wide range of existing secure aggregation protocols\"], \"weaknesses\": [\"The approach impacts the robust privacy guarantees traditionally upheld by state-of-the-art secure aggregation protocols. These protocols typically ensure that an adversary gains no additional information about the inputs of honest clients beyond what is inferred from the aggregated output. However, Partial Vector Freezing (PVF) significantly reduces this privacy. As pointed out by the authors, it is possible for the server to learn whether two clients have similar vector chunks. Although the authors propose a mitigation strategy through Local Differential Privacy to reduce the detection of exact matches, this measure does not fully mitigate the issue of input privacy. The noised client inputs may still leak partial information that allows the server to deduce similarities between inputs. 
Given this trade-off, the computational gains provided by PVF do not justify the notable privacy impact. For instance, in the context of PracAgg, the masking computation is relatively lightweight. It involves field operations and pseudorandom generator evaluations, typically implemented with efficient cryptographic functions like AES. Additionally, the more computationally intensive pairwise key agreements are independent of the vector size and remain necessary regardless of the implementation of PVF.\", \"Another concern is the soundness of the security proof presented in Theorem 1. Specifically, the claim that the protocol execution is indistinguishable from random simulation seems to be inaccurate. The distribution of Hybrid 1 is not indistinguishable from that of Hybrid 0, as the distribution of frozen vectors $y_i$ does not exhibit properties of uniformly sampled vectors. While the random vectors are sampled uniformly from $\\\\mathbb{Z}_p$, the frozen vectors in the protocol are the actual inputs masked with centered Gaussian noise of bounded variance. This results in a non-uniform distribution over $\\\\mathbb{Z}_p$ undermining the indistinguishability between the two hybrids. Furthermore, other parts of the security proof are incomplete. For instance, in Hybrid 3, it is stated that the adversary-controlled clients $\\\\mathcal{C}$ call the ideal functionality. However, in simulation-based proofs, it is typically the simulator, not the adversary, that has direct access to the ideal functionality. Clarifying this aspect would strengthen the proof\\u2019s rigor and ensure alignment with standard cryptographic practices.\"], \"questions\": \"1. The baseline runtime figures for the secure aggregation protocols presented in Figure 1 and Table 1 appear notably higher than those reported in related literature. 
For instance, in the case of PracAgg with a vector length of 100k elements, Figure 1 shows a client runtime of 14 seconds and a server runtime of 140 seconds. In contrast, the original paper by Bonawitz et al. (2017) reports significantly lower runtimes for similar conditions, with client runtimes around 300 milliseconds (Figure 6a) and server runtimes at most 5 seconds (Figure 7a). Could you clarify the reasons for this discrepancy in runtime comparisons?\\n2. Could you provide a more detailed analysis of the privacy impact of your scheme, particularly focusing on the amount of differentially private noise that would be sufficient to mitigate privacy risks effectively? A clearer discussion on how the noise level was determined and its implications on both privacy and utility would be valuable.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}",
"{\"title\": \"Response part I (Id GR2-6)\", \"comment\": \"Dear Reviewer r5Dg:\\n\\nWe greatly appreciate your meticulous and insightful feedback, which will be invaluable in refining our work. We hope our response addresses your concerns.\\n\\n### **R1. Discussion on Combining DP in PracAgg**\\nPlease refer to Appendix A in [1]:\\n\\n*\\\"While secure aggregation alone may suffice for some applications, for other applications stronger guarantees may be needed, as indicated by the failures of ad-hoc anonymization techniques [6, 45, 52], and by the demonstrated capability to extract information about individual training data from fully-trained models (which are essentially aggregates) [26, 50, 51].\\nIn such cases, secure aggregation composes well with differential privacy [21]. This is particularly advantageous in the local privacy setting [20], which offers provable guarantees for the protection of individual training examples [1, 3] even when the data aggregator is not assumed to be trusted [25, 53].\\\"*\\n\\n### **R2. Presentation of Sec. 3.3**\\nFirst, we would like to provide an explanation for the sentence in the original text: \\n*\\\"to ensure the privacy of frozen entries even if some secagg entries are compromised.\\\"* \\nSecagg entries refer to the elements that need to undergo SAP, such as $\\\\mathbf{k}^i$ in General Response 1. Frozen entries refer to $\\\\mathbf{y}^i$. In Sec. 3.2, the security of $\\\\mathbf{x}^i$ relies on the hardness of determining a specific solution to an under-determined system of linear equations, meaning **whether frozen entries can ensure the privacy of $\\\\mathbf{x}^i$ depends on the security of secagg entries**. This is exactly what you have mentioned: the Main PVF may lead to the leakage of correlations between entries. To address this vulnerability, we provide an improved version in Sec. 
3.3.\\n\\nRegarding the inappropriate use of the term \\\"enhancement\\\", we have revised the phrasing in the latest version of the document.\\n\\nFurthermore, we have improved the presentation of Sec. 3. In the new PDF, Sec. 3 outlines the entire design process, starting from the **motivation** behind PVF (Sec. 3.1), to the introduction of a **foundational version** (Sec. 3.2), and then to the **improved version** (Sec. 3.3). The foundational version exposes the linear relationship between elements, and thus, in the improved version, we introduce a new method to address this issue. In the improved version, PVF does not disclose any information about $\\\\mathbf{x}^i$ except for $\\\\sum\\\\mathbf{x}^i$.\\n\\n### **R3. LWE Parameters**\\nIt is important to clarify that the noise added in Eq. (10) is **not** based on the principles of Local Differential Privacy (LDP), although it may appear similar. Instead, it is based on Learning With Errors (LWE) (Lemma 3). We have revised the ambiguous parts in the original PDF to eliminate any confusion. \\n\\nThe security parameters of LWE are ($v, q, \\\\sigma$), where $v$ denotes the width of $\\\\check{\\\\mathbf{A}}$, $q$ represents the size of the input space, and $\\\\sigma$ is the standard deviation of $\\\\mathcal{X}$. \\nThe relationship between the LWE parameters and security can be found in Remark 3 of Appendix D.3 in the latest PDF:\\n\\n*Regev[2] shows that if the size of $q$ is polynomial in $v$ and $\\\\mathcal{X}$ is a discrete Gaussian distribution on $\\\\mathbb{F}_q$ with standard deviation $\\\\sigma > \\\\frac{2\\\\sqrt{v}}{\\\\sqrt{2\\\\pi}}$, the LWE decision problem is at least as hard as the LWE search problem and solving the LWE search problem can be reduced to solving the Shortest Vector Problem. In DVE, $v=\\\\lambda$, and we use $\\\\mathbb{Z}_p$ as $\\\\mathbb{F}_q$.*\\n### **R4. 
The standard deviation of $\\\\mathcal{X}$**\\nIn the original presentation, our focus was on aligning the noise with that added in [3] to assess its impact on model accuracy, specifically by adding noise with the same standard deviation of 0.0409 to the aggregation results (as mentioned in the final paragraph of Sec. 5.1.1 of [3]).\\n\\nDue to the varying sizes of the input space ($q$) and the matrix width ($v$), adding noise of the same magnitude can impact model accuracy to **different extents**. Intuitively, adding noise of the same magnitude in a 16-bit space will have a much smaller impact on accuracy than in a 32-bit space. Therefore, for consistency, we determine the noise added to the input space according to the noise added to the aggregation result (**in float**) and the security requirements of LWE.\\n\\nWe have included the missing experimental details in the latest PDF (Sec. 5.4):\\n\\n*In the 32-bit input space, $\\\\sigma$ is set to 8783 $(>\\\\frac{2\\\\sqrt{1000}}{\\\\sqrt{2\\\\pi}})$, which can meet the security requirements in Remark 3 and is equivalent to adding noise with a standard deviation of 0.0409 to the aggregation result (float).*\"}",
"{\"title\": \"Response part II\", \"comment\": \"### **R3. Security provided by the Integrated SAP**\\nWe believe that after reading R1, you will clearly understand the relationship between PVF and the integrated SAP. PVF and the integrated SAP operate independently and do not interfere with each other. They are completely decoupled. \\n\\nTherefore, when proving security, we have omitted the security analysis of the aggregation of $k^i$, **as the security of the aggregation process for $k^i$ is handled by the integrated SAP, not by PVF**. This is why we can assert that \\\"*the security of the remaining steps, except PVF, is inherently ensured by the integrated SAP itself*\\\".\\n\\nIf you believe there are any deficiencies in our approach, please feel free to share your insights.\\n\\n### Reference\\n[1] Aono, Yoshinori, et al. \\\"Privacy-preserving deep learning via additively homomorphic encryption.\\\" IEEE transactions on information forensics and security 13.5 (2017): 1333-1345.\\n\\n[2] Bonawitz, Keith, et al. \\\"Practical secure aggregation for privacy-preserving machine learning.\\\" proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.\"}",
"{\"comment\": \"Thank you for your response and clarification.\\n\\nPlease refer to Eq. 8 in Sec. 3.2. $\\\\mathbf{x}$ and $\\\\mathbf{k}$ are both free variables. If we consider the underlined part of Eq. 8 as a whole, the number of known equations in each group is $\\\\lambda-1$, while the number of unknowns is $2\\\\lambda$.\"}",
"{\"title\": \"Response part II (Id GR2-6)\", \"comment\": \"### **R5. Modeling malicious participants**\\nWe have added the modeling of malicious participants in the active adversary model, as detailed in Appendix D.3. Specifically, we outline potential active attacks that a malicious participant might execute during the PVF process and describe how we defend against these attacks using symmetric authenticated encryption and digital signatures.\\n### **Question**\\nWe are unable to fully comprehend the meaning of the following sentence in your comment: *\\\"you would be already required to add a prohibitive amount of noise if the dimension of private vectors is large.\\\"* \\n\\nCould you kindly elaborate on the relationship between the required noise magnitude and the dimensionality of the private vectors? Additionally, regarding the relationship between the standard deviation of noise and the hardness of LWE, please refer to **R3** of our response.\\n\\nIf you have any questions, please do not hesitate to raise them. \\n\\nAuthors\\n\\n### **Reference**\\n[1] Bonawitz, Keith, et al. \\\"Practical secure aggregation for privacy-preserving machine learning.\\\" Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.\\n\\n[2] Regev, Oded. \\\"On lattices, learning with errors, random linear codes, and cryptography.\\\" Journal of the ACM (JACM) 56.6 (2009): 1-40.\\n\\n[3] Stevens, Timothy, et al. \\\"Efficient differentially private secure aggregation for federated learning via hardness of learning with errors.\\\" 31st USENIX Security Symposium (USENIX Security 22). 2022.\"}",
"{\"title\": \"Response (Id GR2-14)\", \"comment\": \"Dear Reviewer r5Dg:\\n\\nThank you for your more detailed explanation. We hope the following response will address your concerns.\\n\\n### **Notation**\\nWe consider $\\\\langle\\\\mathbf{A}, \\\\mathbf{e}\\\\rangle$ to be an LWE instance, and $\\\\langle \\\\mathbf{a _ i}, \\\\mathbf{a _ i x}+e _ i \\\\rangle$ represents a sample from the instance $\\\\langle\\\\mathbf{A}, \\\\mathbf{e}\\\\rangle$.\\n### **R1: Response to the First Concern**\\nThe fact that each $e$ follows a different Gaussian distribution **does not** affect the hardness of the LWE problem. For instance, consider $\\\\langle\\\\mathbf{a}, \\\\mathbf{ax} + e\\\\rangle$ and $\\\\langle\\\\mathbf{a}, \\\\mathbf{ax} + e'\\\\rangle$. These can be viewed as belonging to **different** LWE instances, which does not compromise security. Intuitively, providing $\\\\lambda$ samples from an instance $\\\\langle\\\\mathbf{A}, \\\\mathbf{e}\\\\rangle$ does not affect its security, and giving just a single sample similarly has no impact on security.\\n\\n### **R2: Response to the Second Concern**\\n\\nThis issue can be easily resolved by making **a slight adjustment to the distribution of $\\\\mathbf{e}$**. Before we begin, let us present a property of the multivariate Gaussian distribution:\\n\\n*If a random vector $\\\\mathbf{X}$ follows a multivariate Gaussian distribution, the zero elements of its covariance matrix indicate that the corresponding components are independent of each other.*\\n\\n\\nLet us consider the general scenario: given $l (> \\\\lambda)$ LWE samples, i.e., $\\\\langle \\\\mathbf{a} _ i, \\\\mathbf{a} _ i \\\\mathbf{x} + e _ i' \\\\rangle$, for $i \\\\in [1, l]$. Based on fundamental knowledge of linear systems, it is clear that, aside from the $\\\\lambda$ LWE samples with linearly independent $\\\\mathbf{a} _ i$, the remaining samples are redundant.
Therefore, we focus our analysis on the $\\\\lambda$ LWE samples with linearly independent $\\\\mathbf{a} _ i$, i.e., $\\\\mathbf{A} \\\\in \\\\mathcal{Z} _ p^{\\\\lambda \\\\times \\\\lambda}$.\\n\\nGiven $\\\\mathbf{A}$ (**the public parameter**, i.e., $\\\\lambda$ linearly independent $\\\\mathbf{a} _ i$), we prove that when $\\\\mathbf{e} \\\\sim \\\\mathcal{N}(0, \\\\Sigma _ e=\\\\mathbf{A}^{-1}\\\\mathbf{D}\\\\mathbf{A}^{-T})$, where $\\\\Sigma _ e$ is the covariance matrix of $\\\\mathbf{e}$ and $\\\\mathbf{D}$ is a diagonal matrix, the elements of $\\\\mathbf{e}'=\\\\mathbf{A} \\\\mathbf{e}$ are independent:\\n\\nProof.\\n$\\\\mathbf{e}'$ follows a multivariate normal distribution, with its mean and covariance matrix given by:\\n\\n$$\\n\\\\mathbb{E}[\\\\mathbf{e}'] = \\\\mathbb{E}[\\\\mathbf{A}\\\\mathbf{e}] = \\\\mathbf{A}\\\\mathbb{E}[\\\\mathbf{e}] = \\\\mathbf{A} \\\\cdot 0 = 0\\n$$\\n$$\\n\\\\text{Cov}(\\\\mathbf{e}') = \\\\mathbf{A}\\\\text{Cov}(\\\\mathbf{e}) \\\\mathbf{A}^T = \\\\mathbf{A} \\\\Sigma _ e \\\\mathbf{A}^T= \\\\mathbf{A} \\\\mathbf{A}^{-1}\\\\mathbf{D}\\\\mathbf{A}^{-T} \\\\mathbf{A}^T=\\\\mathbf{D}\\n$$\\n\\nTherefore, each component of $\\\\mathbf{e}'$ **independently** follows a Gaussian distribution.\\n\\nWhen **selecting any** $\\\\lambda - 1$ or fewer LWE samples from the aforementioned $\\\\lambda$ samples, the added noise is also **independent** of each other and the amount of information obtainable is **even less** than that from all $\\\\lambda$ samples, thereby ensuring that privacy is not compromised.\\n\\nAfter the above adjustments, the standard deviation of the noise added to $x _ i$ is $\\\\sigma _ i = (\\\\mathbf{A}^{-1}\\\\mathbf{D}\\\\mathbf{A}^{-T}) _ {ii} = \\\\sigma _ i' \\\\sum _ {k=1}^\\\\lambda (\\\\mathbf{A}^{-1}) _ {ik}^2$. 
Since $p$ is a prime number, i.e., $\\\\gcd(\\\\sum _ {k=1}^\\\\lambda (\\\\mathbf{A}^{-1}) _ {ik}^2, p) = 1$, the map $\\\\sigma _ i' \\\\mapsto \\\\sigma _ i' \\\\sum _ {k=1}^\\\\lambda (\\\\mathbf{A}^{-1}) _ {ik}^2 \\\\mod p$ is a bijective mapping from $\\\\mathbb{Z} _ p$ to $\\\\mathbb{Z} _ p$. $\\\\sigma _ i'$ must satisfy $\\\\sigma _ i' > 2\\\\sqrt{\\\\lambda}$, which means that the range of invalid values for $\\\\sigma _ i'$ is **extremely small**. Therefore, $\\\\sigma _ i' \\\\sum _ {k=1}^\\\\lambda (\\\\mathbf{A}^{-1}) _ {ik}^2$ can take essentially all values of $\\\\mathbb{Z} _ p$.\\nIt is evident that we can always choose a $\\\\sigma _ i'$ such that \\n$$\\\\sigma _ i' \\\\sum _ {k=1}^\\\\lambda (\\\\mathbf{A}^{-1}) _ {ik}^2 \\\\mod p < \\\\epsilon,$$ \\nwhere $\\\\epsilon$ is a small number representing the maximum noise that can be added to $x _ i$. \\n\\nFor example, in Fig. 6, the range of invalid values for $\\\\sigma _ i'$ is $[0, \\\\frac{2\\\\sqrt{1000}}{\\\\sqrt{2\\\\pi}} \\\\approx 25]$, meaning that there are **only 26** specific values in $\\\\mathbb{Z} _ p$ that $\\\\sigma _ i' \\\\sum _ {k=1}^\\\\lambda (\\\\mathbf{A}^{-1}) _ {ik}^2$ cannot take. Since $\\\\epsilon = 8783$ still ensures that the added noise is negligible and there are only 26 values that cannot be taken, $\\\\sigma _ i' \\\\sum _ {k=1}^\\\\lambda (\\\\mathbf{A}^{-1}) _ {ik}^2 \\\\mod p$ can always take values within the range $[0, 8783]$. According to **R1**, the fact that the diagonal elements of $\\\\mathbf{D}$ are different does not compromise security. Thus, we can always construct a $\\\\mathbf{D}$ that not only ensures the noise added to $\\\\mathbf{x}$ is negligible but also guarantees that the components of $\\\\mathbf{e}'$ in the LWE instance are independent of each other and $\\\\sigma _ i' > 2\\\\sqrt{\\\\lambda}$.\"}",
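The covariance derivation in the response above ($\mathrm{Cov}(\mathbf{A}\mathbf{e}) = \mathbf{A}\Sigma_e\mathbf{A}^T = \mathbf{D}$ when $\Sigma_e = \mathbf{A}^{-1}\mathbf{D}\mathbf{A}^{-T}$) can be checked on a concrete instance. The sketch below (an editorial aside, not from the thread) works over the reals with a toy $2 \times 2$ matrix rather than $\mathbb{Z}_p^{\lambda \times \lambda}$; all numeric values are illustrative.

```python
# Verify: if Sigma_e = A^{-1} D A^{-T} with D diagonal, then the covariance
# of e' = A e is A Sigma_e A^T = D, so the components of e' are independent.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(X):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]   # public invertible matrix (toy stand-in)
D = [[4.0, 0.0], [0.0, 9.0]]   # target diagonal covariance for e' = A e
Ainv = inv2(A)

# Sigma_e = A^{-1} D A^{-T}, the covariance chosen for e in the response.
Sigma_e = matmul(matmul(Ainv, D), transpose(Ainv))

# Cov(A e) = A Sigma_e A^T, which equals D exactly.
Cov_eprime = matmul(matmul(A, Sigma_e), transpose(A))
print(Cov_eprime)  # -> [[4.0, 0.0], [0.0, 9.0]]
```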
"{\"title\": \"Response (Id GR2-10)\", \"comment\": \"Firstly, q (i.e., p in PVF) is **certainly not 32**. As is well known, $\\\\mathbb{Z}_p$ refers to the set $\\\\\\\\{ 0, 1, \\\\ldots, p-1 \\\\\\\\}$ for a large prime p. In this paper, the input space is 32-bit, meaning p is a **fixed 32-bit large prime**, specifically 4294967291.\\n\\nSecondly, in this paper, $v = \\\\lambda$, and $\\\\lambda$ can be fixed to a value in $[100, 1000]$.\\n\\nFinally, for the security requirements of PVF, please refer to Remark 3.\"}",
"{\"metareview\": \"The paper proposes $\\\\lambda$-SecAgg, a secure aggregation protocol for federated learning (FL) that reduces computational and communication overhead using Partial Vector Freezing (PVF). However, most reviewers raised concerns that PVF significantly reduces privacy. Despite extensive discussion of these issues, the reviewers maintained their reviews and scores. Given these issues, I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers noted that this work lacks a standard notion of security. The authors provided a more detailed explanation but failed to address the reviewers' concerns. I agree with Reviewer r5Dg that this paper still requires a large amount of work before it can be accepted.\"}",
"{\"comment\": \"After our numerous discussions, your latest response led us to believe that we had successfully addressed your concerns. If there has been any misunderstanding, we sincerely apologize. Could you kindly highlight any remaining privacy-related issues with PVF? We will do our best to answer your questions.\"}",
"{\"comment\": \"Thanks for your response and the simple example in General Response 1.\\n\\nLet me clarify my comments with your example. I'd like to claim that $y^1_1 \\\\in \\\\mathbb{F}_p^2$ provides some information about $x^1_1 \\\\in \\\\mathbb{F}_p^3$ to the server, even though the server cannot fully deduce $x^1_1$ from $y^1_1$.\\n\\nThis is because $H(x^1_1)=3\\\\log(p)$ while $H(x^1_1 | y^1_1)=\\\\log(p)$. Intuitively, before the server receives $y^1_1$, the number of free variables was 3 (as the number of elements in $x^1_1$ is 3). After the server knows $y^1_1$, however, the number of free variables is reduced to 1.\\n\\nPlease correct me if I miss something.\"}",
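The reviewer's free-variable count can be illustrated concretely: for a public $2 \times 3$ matrix over $\mathbb{Z}_p$, an observation $\mathbf{y} = \check{\mathbf{A}}\mathbf{x}$ leaves a one-dimensional coset of preimages, i.e., $p$ equally consistent candidates for $\mathbf{x}$. The sketch below (an editorial aside; the toy prime, matrix, and vectors are made up for illustration) exhibits a second preimage explicitly.

```python
# Underdetermined linear system over Z_p: y = A_check @ x with A_check 2x3.
p = 97
A_check = [[1, 2, 3],
           [4, 5, 6]]
x = [10, 20, 30]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % p for i in range(len(M))]

y = apply(A_check, x)

# A kernel vector of A_check over Z_p: [1, -2, 1] satisfies
# 1 - 4 + 3 = 0 and 4 - 10 + 6 = 0.
kernel = [1, p - 2, 1]

# Every x + t * kernel (t in Z_p) yields the same y: p candidate preimages,
# matching the reviewer's H(x | y) = log(p) count of one remaining free variable.
x_alt = [(x[j] + 5 * kernel[j]) % p for j in range(3)]
print(apply(A_check, x_alt) == y)  # -> True: x cannot be pinned down from y alone
```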
"{\"summary\": \"This paper devises a portable module named $\\\\lambda$-SecAgg for secure aggregation in federated learning. The authors also propose an extension involving disrupting variables to enhance privacy. Through extensive experiments, they demonstrate the efficiency of the proposed method, achieving up to a 99.5\\u00d7 speedup and up to a 32.3\\u00d7 communication reduction.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Theoretical proofs.\\n2. The experimental results demonstrate that PVF achieves up to a 99.5\\u00d7 speedup and up to a 32.3\\u00d7 communication reduction.\", \"weaknesses\": \"1. Writing/technical issues:\\n(1) In the Introduction section, the authors mentioned that \\\"the minimal noise added by DP is insufficient to thwart attacks\\\", yet they also suggest considering \\\"DP in the extension for enhanced privacy.\\\" Is there a deeper reason or gap that I might have overlooked?\\n(2) The introduction of \\\"compression-based techniques\\\" in Figure 3 and Section 2 feels somewhat abrupt, primarily due to the lack of clarity in the classification of existing solutions outlined in the Introduction section. I suggest providing a clearer explanation of the criteria used to categorize the methods into secure aggregation techniques and compression-based techniques. Additionally, providing an analysis of the limitations of existing methods would help readers better understand the motivation behind the development of the PVF.\\n(3) The definition of the adversary in the threat model is not very clear, particularly regarding key aspects such as adversary knowledge and adversary capabilities, which have not been adequately explained or defined.\\n(4) Figure 4 is too abstract to understand.\\n(5) In Section 3.3, while discussing secure aggregation, it is noted that the requirements for data accuracy are relatively high. However, the introduction of DP typically involves adding noise to the data. It would be beneficial to clarify how the accuracy of the data can be maintained after noise has been added, particularly in the context of the freezing and melting processes.\\n\\n2. Experimental issues:\\n(1) The neural network architectures and datasets are not introduced in the \\u2018Experimental settings\\u2019.\\n(2) The setting of $\\\\lambda = 100$ in some experiments requires further explanation.\\n(3) The experimental validation, although comprehensive, is limited to specific neural network architectures and datasets. Its generalisability to other models and types of data may require further examination.\", \"questions\": \"Please refer to the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}",
"{\"comment\": \"Dear Reviewer SAYu:\\n\\nRegarding your concerns about privacy, we have demonstrated in the latest version of the PDF (Appendix D.3, page 18) that $\\\\mathbf{y}^i$ does not reveal any information about $\\\\mathbf{x}^i$. This conclusion is theoretically supported by the hardness of LWE decision problem (Lemma 3). Additionally, a brief summary of the progress made in the rebuttal for our submission can be found in General Response 2. If you have any further questions, please do not hesitate to raise them.\\n\\nWe are looking forward to your response.\"}",
"{\"title\": \"Response (Id GR2-12)\", \"comment\": \"Dear Reviewer r5Dg:\\n\\nPlease refer to **line 270 (in Sec. 3.3)**, **Fig. 13**, **General Response 3**, or [this comment](https://openreview.net/forum?id=E1Tr7wTlIt&noteId=d9RgcLFX4B), where we have repeatedly emphasized that $\\\\mathbf{A}$, including $\\\\check{\\\\mathbf{A}}$, is **public** (as is $\\\\mathbf{A}$ in LWE) and remains **unchanged** throughout the entire training task. $\\\\check{\\\\mathbf{A}}$ is a **constant**, and therefore, $\\\\mathbf{e}'$ remains a random noise vector following the Gaussian distribution.\\n\\nIf you have any questions, please do not hesitate to raise them.\"}",
"{\"comment\": \"Dear Reviewer r5Dg:\\n\\nFrom the comments you provided, we understand that your concerns primarily stem from the possibility that $\\\\mathbf{y}=\\\\check{\\\\mathbf{A}}\\\\mathbf{x}$ in PVF might reveal partial (linear) relationships of $\\\\mathbf{x}$, i.e., information beyond \\\"*the aggregation of the private vectors of the honest parties*\\\". We kindly ask you to review **\\\"Enhanced version: Disrupting Variables Extension (Sec. 3.3)\\\"** in General response 3: Privacy Protection Overview.\\n\\nIn PVF with DVE, $\\\\mathbf{x}^i$ is added with noise $\\\\mathbf{e}$ through Eq. 10 and $\\\\mathbf{y}^i=\\\\check{\\\\mathbf{A}}(\\\\mathbf{x}^i+ \\\\mathbf{e})=\\\\check{\\\\mathbf{A}}\\\\mathbf{x}^i+ \\\\mathbf{e'}$ ($\\\\check{\\\\mathbf{A}}$ is public). Therefore:\\n* **The hardness of LWE search problem** ensures that the server cannot obtain any information about $\\\\mathbf{x}^i$ from $\\\\mathbf{y}^i$\\n* **The hardness of LWE decision problem** ensures that given a uniformly random vector $\\\\mathbf{w}^i$, $(\\\\check{\\\\mathbf{A}}, \\\\mathbf{y}^i)$ and $(\\\\check{\\\\mathbf{A}}, \\\\mathbf{w}^i)$ are indistinguishable.\\n\\nThat is to say, in PVF with DVE, user privacy **no longer relies on** the hardness of determining a specific solution to an under-determined system of linear equations but instead relies on **the hardness of LWE search and decision problem**. In other words, the phenomenon, where \\\"*the space of possible solutions of the linear system shrinks exponentially in the number of dimensions of private vectors compared with existing secure protocols*\\\", **does not occur** in our scheme.\\n\\nIf there are any aspects of our scheme that lack rigor, we kindly ask you to point them out.\"}"
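The algebra (not the security) of the DVE masking cited in the response above, $\check{\mathbf{A}}(\mathbf{x} + \mathbf{e}) = \check{\mathbf{A}}\mathbf{x} + \mathbf{e}'$ with $\mathbf{e}' = \check{\mathbf{A}}\mathbf{e}$, can be checked mechanically. The sketch below (an editorial aside) uses the 32-bit prime mentioned in the thread but a toy $\lambda$ and uniform stand-in noise instead of the discrete Gaussian; LWE hardness itself is of course not demonstrable by such a test.

```python
import random

# Toy check of the masking identity y = A_check (x + e) = A_check x + e'.
p = 4294967291          # the fixed 32-bit prime mentioned in the thread
lam = 4                 # toy lambda; the paper uses values in [100, 1000]
rng = random.Random(0)

A_check = [[rng.randrange(p) for _ in range(lam)] for _ in range(lam)]
x = [rng.randrange(p) for _ in range(lam)]
e = [rng.randrange(100) for _ in range(lam)]   # stand-in for Gaussian noise

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % p for i in range(len(M))]

# Left side: mask the noised vector directly.
lhs = apply(A_check, [(xi + ei) % p for xi, ei in zip(x, e)])

# Right side: A_check x plus the transformed noise e' = A_check e.
e_prime = apply(A_check, e)
rhs = [(a + b) % p for a, b in zip(apply(A_check, x), e_prime)]

print(lhs == rhs)  # -> True: the masked vector equals A_check x plus e'
```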
]
}